prompt,context,A,B,C,D,E,answer,dataset,context_1,context_2,more_context,prompt_2,clean_answer,apk
"Which of the following statements accurately describes the impact of Modified Newtonian Dynamics (MOND) on the observed ""missing baryonic mass"" discrepancy in galaxy clusters?","The presence of a clustered thick disk-like component of dark matter in the Galaxy has been suggested by Sanchez-Salcedo (1997, 1999) and Kerins (1997).Kerins, E. J. 1997, Astronomy and Astrophysics, 322, 709-718 (ADS entry )Sánchez-Salcedo, F. J. 1997, Astrophysical Journal, 487, L61-L64 (ADS entry )Sánchez-Salcedo, F. J. 1999, Monthly Notices of the Royal Astronomical Society, 303, 755-772 (ADS entry ) ==See also== * Dark matter * Brown dwarfs * White dwarfs * Microlensing * Hypercompact stellar system * Massive compact halo object (MACHOs) * Weakly interacting massive particles (WIMPs) ==References== Category:Star clusters Category:Open clusters Observations of the Bullet Cluster are the strongest evidence for the existence of dark matter; however, Brownstein and Moffat have shown that their modified gravity theory can also account for the properties of the cluster. == Observational methods == Clusters of galaxies have been found in surveys by a number of observational techniques and have been studied in detail using many methods: * Optical or infrared: The individual galaxies of clusters can be studied through optical or infrared imaging and spectroscopy. The observed distortions can be used to model the distribution of dark matter in the cluster. == Temperature and density == Clusters of galaxies are the most recent and most massive objects to have arisen in the hierarchical structure formation of the Universe and the study of clusters tells one about the way galaxies form and evolve. A 2021 article postulated that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies, and that this would explain the missing baryons not accounted for in the 2017 paper. == Current state == Currently, many groups have observed the intergalactic medium and circum-galactic medium to obtain more measurements and observations of baryons to support the leading observations. In cosmology, the missing baryon problem is an observed discrepancy between the amount of baryonic matter detected from shortly after the Big Bang and from more recent epochs. Brownstein and Moffat use a theory of modified gravity to explain X-ray cluster masses without dark matter. The missing baryon problem has been resolved but research groups are working to detect the WHIM using varying methods to confirm results. ==References== Category:Physical cosmology Category:Baryons Baryons make up only ~5% of the universe, while dark matter makes up 26.8%. ==Early universe measurements== The abundance of baryonic matter in the early universe can be obtained indirectly from two independent methods: * The theory of Big Bang nucleosynthesis, which predicts the observed relative abundance of the chemical elements in observations of the recent universe. The missing baryon problem is different from the dark matter problem, which is non-baryonic in nature.See Lambda-CDM model. In a typical cluster perhaps only 5% of the total mass is in the form of galaxies, maybe 10% in the form of hot X-ray emitting gas and the remainder is dark matter. In astronomy, a RAMBO or robust association of massive baryonic objects is a dark cluster made of brown dwarfs or white dwarfs. It is composed of mostly ionized hydrogen and is about 10% of a galaxy cluster's total mass; the rest being dark matter. 
This is highly nontrivial, since although luminous matter such as stars and galaxies are easily summed, baryonic matter can also exist in highly non-luminous form, such as black holes, planets, and highly diffuse interstellar gas. Cosmological hydrodynamical simulations from theory predict that a fraction of the missing baryons are located in galactic haloes at temperatures of 106 K and the (WHIM) at temperatures of 105–107 K, with recent observations providing strong support. 50x50px Available under CC BY 4.0. In models for the gravitational formation of structure with cold dark matter, the smallest structures collapse first and eventually build the largest structures, clusters of galaxies. Large scale galaxy surveys in the 2000s revealed a baryon deficit. At the same time, a census of baryons in the recent observable universe has found that observed baryonic matter accounts for less than half of that amount. A mass deficit is the amount of mass (in stars) that has been removed from the center of a galaxy, presumably by the action of a binary supermassive black hole. thumb|left|The figure illustrates how mass deficits are measured, using the observed brightness profile of a galaxy The density of stars increases toward the center in most galaxies. One claim of a solution was published in 2017 when two groups of scientists said they found evidence for the location of missing baryons in intergalactic matter. When observed visually, clusters appear to be collections of galaxies held together by mutual gravitational attraction. ","MOND is a theory that reduces the observed missing baryonic mass in galaxy clusters by postulating the existence of a new form of matter called ""fuzzy dark matter.""",MOND is a theory that increases the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of around 10 to a factor of about 20.,MOND is a theory that explains the missing baryonic mass in galaxy clusters that was previously considered dark matter by demonstrating that the mass is in the form of neutrinos and axions.,MOND is a theory that reduces the discrepancy between the observed missing baryonic mass in galaxy clusters and the measured velocity dispersions from a factor of around 10 to a factor of about 2.,MOND is a theory that eliminates the observed missing baryonic mass in galaxy clusters by imposing a new mathematical formulation of gravity that does not require the existence of dark matter.,D,kaggle200,"There have been a number of attempts to solve the problem of galaxy rotation by modifying gravity without invoking dark matter. One of the most discussed is modified Newtonian dynamics (MOND), originally proposed by Mordehai Milgrom in 1983, which modifies the Newtonian force law at low accelerations to enhance the effective gravitational attraction. MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, matching the baryonic Tully–Fisher relation, and the velocity dispersions of the small satellite galaxies of the Local Group.
The Bullet Cluster provides the best current evidence for the nature of dark matter and provides ""evidence against some of the more popular versions of Modified Newtonian dynamics (MOND)"" as applied to large galactic clusters. At a statistical significance of 8, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law alone.
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. Since Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matter. However, MOND and its generalizations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the hypothesis.
The most serious problem facing Milgrom's law is that it cannot eliminate the need for dark matter in all astrophysical systems: galaxy clusters show a residual mass discrepancy even when analyzed using MOND. The fact that some form of unseen mass must exist in these systems detracts from the adequacy of MOND as a solution to the missing mass problem, although the amount of extra mass required is a fifth that of a Newtonian analysis, and there is no requirement that the missing mass be non-baryonic. It has been speculated that 2 eV neutrinos could account for the cluster observations in MOND while preserving the hypothesis's successes at the galaxy scale. Indeed, analysis of sharp lensing data for the galaxy cluster Abell 1689 shows that MOND only becomes distinctive at Mpc distance from the center, so that Zwicky's conundrum remains, and 1.8 eV neutrinos are needed in clusters.","Outstanding problems for MOND The most serious problem facing Milgrom's law is that it cannot eliminate the need for dark matter in all astrophysical systems: galaxy clusters show a residual mass discrepancy even when analyzed using MOND. The fact that some form of unseen mass must exist in these systems detracts from the adequacy of MOND as a solution to the missing mass problem, although the amount of extra mass required is a fifth that of a Newtonian analysis, and there is no requirement that the missing mass be non-baryonic. It has been speculated that 2 eV neutrinos could account for the cluster observations in MOND while preserving the hypothesis's successes at the galaxy scale. Indeed, analysis of sharp lensing data for the galaxy cluster Abell 1689 shows that MOND only becomes distinctive at Mpc distance from the center, so that Zwicky's conundrum remains, and 1.8 eV neutrinos are needed in clusters.The 2006 observation of a pair of colliding galaxy clusters known as the ""Bullet Cluster"", poses a significant challenge for all theories proposing a modified gravity solution to the missing mass problem, including MOND. Astronomers measured the distribution of stellar and gas mass in the clusters using visible and X-ray light, respectively, and in addition mapped the inferred dark matter density using gravitational lensing. In MOND, one would expect the ""missing mass"" to be centred on regions of visible mass which experience accelerations lower than a0 (assuming the external field effect is negligible). In ΛCDM, on the other hand, one would expect the dark matter to be significantly offset from the visible mass because the halos of the two colliding clusters would pass through each other (assuming, as is conventional, that dark matter is collisionless), whilst the cluster gas would interact and end up at the centre. An offset is clearly seen in the observations. It has been suggested, however, that MOND-based models may be able to generate such an offset in strongly non-spherically symmetric systems, such as the Bullet Cluster.A significant piece of evidence in favor of standard dark matter is the observed anisotropies in the cosmic microwave background. While ΛCDM is able to explain the observed angular power spectrum, MOND has a much harder time, though recently it has been shown that MOND can fit the observations too. MOND also encounters difficulties explaining structure formation, with density perturbations in MOND perhaps growing so rapidly that too much structure is formed by the present epoch. 
However, forming galaxies more rapidly than in ΛCDM can be a good thing to some extent.Several other studies have noted observational difficulties with MOND. For example, it has been claimed that MOND offers a poor fit to the velocity dispersion profile of globular clusters and the temperature profile of galaxy clusters, that different values of a0 are required for agreement with different galaxies' rotation curves, and that MOND is naturally unsuited to forming the basis of cosmology. Furthermore, many versions of MOND predict that the speed of light is different from the speed of gravity, but in 2017 the speed of gravitational waves was measured to be equal to the speed of light to high precision. This is well understood in modern relativistic theories of MOND, with the constraint from gravitational waves actually helping by substantially restricting how a covariant theory might be constructed.Besides these observational issues, MOND and its relativistic generalizations are plagued by theoretical difficulties. Several ad hoc and inelegant additions to general relativity are required to create a theory compatible with a non-Newtonian non-relativistic limit, though the predictions in this limit are rather clear. This is the case for the more commonly used modified gravity versions of MOND, but some formulations (most prominently those based on modified inertia) have long suffered from poor compatibility with cherished physical principles such as conservation laws. Researchers working on MOND generally do not interpret it as a modification of inertia, with only very limited work done on this area.
Milgrom's law fully specifies the rotation curve of a galaxy given only the distribution of its baryonic mass. In particular, MOND predicts a far stronger correlation between features in the baryonic mass distribution and features in the rotation curve than does the dark matter hypothesis (since dark matter dominates the galaxy's mass budget and is conventionally assumed not to closely track the distribution of baryons). Such a tight correlation is claimed to be observed in several spiral galaxies, a fact which has been referred to as ""Renzo's rule"".
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. Since Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matter. Though MOND explains the anomalously great rotational velocities of galaxies at their perimeters, it does not fully explain the velocity dispersions of individual galaxies within galaxy clusters. MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to a factor of about 2. However, the residual discrepancy cannot be accounted for by MOND, requiring that other explanations close the gap such as the presence of as-yet undetected missing baryonic matter. The accurate measurement of the speed of gravitational waves compared to the speed of light in 2017 ruled out a certain class of modified gravity theories but concluded that other MOND theories that dispense with the need for dark matter remained viable. Two years later, theories put forth by Constantinos Skordis and Tom Zlosnik were consistent with gravitational waves that always travel at the speed of light. Later still in 2021, Skordis and Zlosnik developed a subclass of their theory called ""RMOND"", for ""relativistic MOND"", which had ""been shown to reproduce in great detail the main observations in cosmology, including the cosmic-microwave-background power spectrum, and the matter structure power spectrum."" ","MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to a factor of about 2. At a statistical significance of 8, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law alone.
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, matching the baryonic Tully–Fisher relation, and the velocity dispersions of the small satellite galaxies of the Local Group.
The Bullet Cluster provides the best current evidence for the nature of dark matter and provides ""evidence against some of the more popular versions of Modified Newtonian dynamics (MOND)"" as applied to large galactic clusters. The fact that some form of unseen mass must exist in these systems detracts from the adequacy of MOND as a solution to the missing mass problem, although the amount of extra mass required is a fifth that of a Newtonian analysis, and there is no requirement that the missing mass be non-baryonic. However, MOND and its generalizations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the hypothesis.
The most serious problem facing Milgrom's law is that it cannot eliminate the need for dark matter in all astrophysical systems: galaxy clusters show a residual mass discrepancy even when analyzed using MOND. In particular, MOND predicts a far stronger correlation between features in the baryonic mass distribution and features in the rotation curve than does the dark matter hypothesis (since dark matter dominates the galaxy's mass budget and is conventionally assumed not to closely track the distribution of baryons). Indeed, analysis of sharp lensing data for the galaxy clu","
[SEP]Which of the following statements accurately describes the impact of Modified Newtonian Dynamics (MOND) on the observed ""missing baryonic mass"" discrepancy in galaxy clusters?","['D', 'E', 'C']",1.0
Which of the following is an accurate definition of dynamic scaling in self-similar systems?,"Many of these systems evolve in a self-similar fashion in the sense that data obtained from the snapshot at any fixed time is similar to the respective data taken from the snapshot of any earlier or later time. Many other seemingly disparate systems which are found to exhibit dynamic scaling. The form of their proposal for dynamic scaling was: :f(x,t)\sim t^{-w}x^{-\tau} \varphi \left( \frac x {t^z} \right), where the exponents satisfy the following relation: :w=(2-\tau)z. == Test for dynamic scaling == In such systems we can define a certain time-dependent stochastic variable x. Dynamic scaling (sometimes known as Family-Vicsek scaling) is a litmus test that shows whether an evolving system exhibits self-similarity. Self-similar processes are types of stochastic processes that exhibit the phenomenon of self-similarity. Essentially such systems can be termed as temporal self-similarity since the same system is similar at different times. == Examples == Many phenomena investigated by physicists are not static but evolve probabilistically with time (i.e. Stochastic process). If the numerical values of the dimensional quantities change, but corresponding dimensionless quantities remain invariant then we can argue that snapshots of the system at different times are similar. One way of verifying dynamic scaling is to plot dimensionless variables f/t^\theta as a function of x/t^z of the data extracted at various different time. In the study of partial differential equations, particularly in fluid dynamics, a self-similar solution is a form of solution which is similar to itself if the independent and dependent variables are appropriately scaled. Then if all the plots of f vs x obtained at different times collapse onto a single universal curve then it is said that the systems at different time are similar and it obeys dynamic scaling. The litmus test of such self-similarity is provided by the dynamic scaling. == History == The term ""dynamic scaling"" as one of the essential concepts to describe the dynamics of critical phenomena seems to originate in the seminal paper of Pierre Hohenberg and Bertrand Halperin (1977), namely they suggested ""[...] that the wave vector- and frequencydependent susceptibility of a ferromagnet near its Curie point may be expressed as a function independent of |T-T_C| provided that the length and frequency scales, as well as the magnetization and magnetic field, are rescaled by appropriate powers of |T-T_C|.."" In general a function is said to exhibit dynamic scaling if it satisfies: :f(x,t)\sim t^\theta \varphi \left( \frac x {t^z} \right). Self-similarity in packetised data networks can be caused by the distribution of file sizes, human interactions and/ or Ethernet dynamics. A self-similar phenomenon behaves the same when viewed at different degrees of magnification, or different scales on a dimension (space or time). When this happens we say that the system is self-similar. That is, the system is similar to itself at different times. Self-similar Ethernet traffic exhibits dependencies over a long range of time scales. In computer architecture, dynamic voltage scaling is a power management technique in which the voltage used in a component is increased or decreased, depending upon circumstances. Deriving mathematical models which accurately represent long- range dependent traffic is a fertile area of research. 
==Self-similar stochastic processes modeled by Tweedie distributions== Leland et al have provided a mathematical formalism to describe self-similar stochastic processes. For example: * kinetics of aggregation described by Smoluchowski coagulation equation, * complex networks described by Barabasi–Albert model, * the kinetic and stochastic Cantor set, * the growth model within the Kardar–Parisi–Zhang (KPZ) universality class; one find that the width of the surface W(L,t) exhibits dynamic scaling.. * the area size distribution of the blocks of weighted planar stochastic lattice (WPSL) also exhibits dynamic scaling. * the marginal probabilities of fractional Poisson processes exhibits dynamic scaling. ==References== Category:Physical phenomena Category:Stochastic models ","Dynamic scaling refers to the evolution of self-similar systems, where data obtained from snapshots at fixed times exhibits similarity to the respective data taken from snapshots of any earlier or later time. This similarity is tested by a certain time-dependent stochastic variable x.","Dynamic scaling refers to the non-evolution of self-similar systems, where data obtained from snapshots at fixed times is similar to the respective data taken from snapshots of any earlier or later time. This similarity is tested by a certain time-dependent stochastic variable x.","Dynamic scaling refers to the evolution of self-similar systems, where data obtained from snapshots at fixed times is dissimilar to the respective data taken from snapshots of any earlier or later time. This dissimilarity is tested by a certain time-independent stochastic variable y.","Dynamic scaling refers to the non-evolution of self-similar systems, where data obtained from snapshots at fixed times is dissimilar to the respective data taken from snapshots of any earlier or later time. This dissimilarity is tested by a certain time-independent stochastic variable y.","Dynamic scaling refers to the evolution of self-similar systems, where data obtained from snapshots at fixed times is independent of the respective data taken from snapshots of any earlier or later time. This independence is tested by a certain time-dependent stochastic variable z.",A,kaggle200,"Later Tamás Vicsek and Fereydoon Family proposed the idea of dynamic scaling in the context of diffusion-limited aggregation (DLA) of clusters in two dimensions. The form of their proposal for dynamic scaling was:
Dynamic scaling (sometimes known as Family-Vicsek scaling) is a litmus test that shows whether an evolving system exhibits self-similarity. In general a function is said to exhibit dynamic scaling if it satisfies: f(x,t) ∼ t^θ φ(x/t^z).
In such systems we can define a certain time-dependent stochastic variable x. We are interested in computing the probability distribution of x at various instants of time, i.e. f(x,t). The numerical value of f and the typical or mean value of x generally change over time. The question is: what happens to the corresponding dimensionless variables? If the numerical values of the dimensional quantities change, but corresponding dimensionless quantities remain invariant then we can argue that snapshots of the system at different times are similar. When this happens we say that the system is self-similar.
Many of these systems evolve in a self-similar fashion in the sense that data obtained from the snapshot at any fixed time is similar to the respective data taken from the snapshot of any earlier or later time. That is, the system is similar to itself at different times. The litmus test of such self-similarity is provided by the dynamic scaling.","Dynamic scaling (sometimes known as Family-Vicsek scaling) is a litmus test that shows whether an evolving system exhibits self-similarity. In general a function is said to exhibit dynamic scaling if it satisfies: f(x,t)∼tθφ(xtz).
In such systems we can define a certain time-dependent stochastic variable x. We are interested in computing the probability distribution of x at various instants of time, i.e. f(x,t). The numerical value of f and the typical or mean value of x generally change over time. The question is: what happens to the corresponding dimensionless variables? If the numerical values of the dimensional quantities change, but the corresponding dimensionless quantities remain invariant, then we can argue that snapshots of the system at different times are similar. When this happens we say that the system is self-similar. One way of verifying dynamic scaling is to plot the dimensionless variable f/t^θ as a function of x/t^z for data extracted at various different times. If all the plots of f vs x obtained at different times then collapse onto a single universal curve, the systems at different times are said to be similar and to obey dynamic scaling (see the data-collapse sketch after this row). The idea of data collapse is deeply rooted in the Buckingham Pi theorem. Essentially, such systems exhibit temporal self-similarity, since the same system is similar at different times.
Here the exponent θ is fixed by the dimensional requirement [f] = [t^θ]. The numerical value of f/t^θ should remain invariant even if the unit of measurement of t is changed by some factor, since φ is a dimensionless quantity. Many of these systems evolve in a self-similar fashion in the sense that data obtained from the snapshot at any fixed time is similar to the respective data taken from the snapshot of any earlier or later time. That is, the system is similar to itself at different times. The litmus test of such self-similarity is provided by the dynamic scaling.","The litmus test of such self-similarity is provided by the dynamic scaling.
In such systems we can define a certain time-dependent stochastic variable x. In general a function is said to exhibit dynamic scaling if it satisfies: f(x,t) ∼ t^θ φ(x/t^z).
In such systems we can define a certain time-dependent stochastic variable x. Dynamic scaling (sometimes known as Family-Vicsek scaling) is a litmus test that shows whether an evolving system exhibits self-similarity. Then if all the plots of f vs x obtained at different times collapse onto a single universal curve, it is said that the systems at different times are similar and obey dynamic scaling. The form of their proposal for dynamic scaling was: f(x,t) ∼ t^{-w} x^{-τ} φ(x/t^z), where the exponents satisfy w = (2-τ)z.
Dynamic scaling (sometimes known as Family-Vicsek scaling) is a litmus test that shows whether an evolving system exhibits self-similarity. One way of verifying dynamic scaling is to plot the dimensionless variable f/t^θ as a function of x/t^z for data extracted at various different times. Essentially such systems can be termed temporally self-similar, since the same system is similar at different times.
Here the exponent θ is fixed by the dimensional requirement [f] = [t^θ]. When this happens we say that the system is self-similar. The question is: what happens to the corresponding dimensionless variables? If the numerical values of the dimensional quantities change, but corresponding dimensionless quantities remain invariant then we can argue that snapshots of the system at different times are similar. When this happens we say that the system is self-similar.
Many of these systems evolve in a self-similar fashion in the sense that data obtained from the snapshot at any fixed time is similar to the respective data taken from the snapshot of any earlier or later time. Later Tamás Vicsek and Fereydoon Family proposed the idea of dynamic scaling in the context of diffusion-limited aggregation (DLA) of clusters in two dimensionsTh","
[SEP]Which of the following is an accurate definition of dynamic scaling in self-similar systems?","['A', 'D', 'B']",1.0
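The data-collapse test described in this record lends itself to a short numerical check. The following is a minimal sketch, not taken from the source: the exponents theta and z and the scaling function phi(u) = exp(-u^2) are arbitrary assumptions used only to generate synthetic data of the form f(x,t) = t^θ φ(x/t^z); the test itself (rescale f by t^θ and x by t^z, then verify that curves from different times coincide) is the procedure described above.

```python
import numpy as np

# Illustrative assumptions (not from the text): exponents and a smooth scaling
# function phi, used only to generate synthetic data that obeys dynamic scaling.
theta, z = 0.5, 1.5

def phi(u):
    return np.exp(-u**2)

def f(x, t):
    # Synthetic observable of the Family-Vicsek form f(x, t) = t^theta * phi(x / t^z).
    return t**theta * phi(x / t**z)

u = np.linspace(0.0, 3.0, 200)        # common rescaled abscissa u = x / t^z
times = [1.0, 2.0, 5.0, 10.0]         # snapshot times

# Data collapse: for each snapshot, sample f on x = u * t^z and divide by t^theta.
# If dynamic scaling holds, every rescaled curve equals phi(u), i.e. they collapse.
curves = [f(u * t**z, t) / t**theta for t in times]
collapsed = all(np.allclose(c, curves[0]) for c in curves)
print("collapse onto a single universal curve:", collapsed)  # True
```

With real measurements one plots f/t^θ against x/t^z instead; collapse onto one curve (up to noise) is the signature of dynamic scaling, while a systematic spread between times indicates its absence.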
Which of the following statements accurately describes the origin and significance of the triskeles symbol?,"It is possible that this usage is related with the Greek name of the island of Sicily, Trinacria (Τρινακρία ""having three headlands"").Liddell and Scott’s Greek-English Lexicon (A Lexicon Abridged from), Oxford, 1944, p.27, Cassell's Latin Dictionary, Marchant, J.R.V, & Charles, Joseph F., (Eds.), Revised Edition, 1928 The Sicilian triskeles is shown with the head of Medusa at the center.Matthews, Jeff (2005) Symbols of Naples The ancient symbol has been re-introduced in modern flags of Sicily since 1848. An early flag of Sicily, proposed in 1848, included the Sicilian triskeles or ""Trinacria symbol"". The triskeles was adopted as emblem by the rulers of Syracuse. The oldest find of a triskeles in Sicily is a vase dated to 700 BCE, for which researchers assume a Minoan-Mycenaean origin. ===Roman period and Late Antiquity=== Late examples of the triple spiral symbols are found in Iron Age Europe, e.g. carved in rock in Castro Culture settlement in Galicia, Asturias and Northern Portugal. In the Hellenistic period, the symbol becomes associated with the island of Sicily, appearing on coins minted under Dionysius I of Syracuse beginning in BCE.Arthur Bernard Cook, Zeus: a study in ancient religion, Volume 3, Part 2 (1940), p. 1074. The actual triskeles symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period. Also p. 134: [On CRs] ""Using Celtic symbols such as triskeles and spirals"" Other uses of triskelion-like emblems include the logo for the Trisquel Linux distribution and the seal of the United States Department of Transportation. The triskelion was a motif in the art of the Iron age Celtic La Tène culture. ===Classical Antiquity=== The triskeles proper, composed of three human legs, is younger than the triple spiral, found in decorations on Greek pottery especially as a design shown on hoplite shields, and later also minted on Greek and Anatolian coinage. Airavella, Allariz, Galicia File:Torque de Santa Tegra 1.JPG|Triskelion and spirals on a Galician torc terminal. ===Medieval=== File:Triskel-triskele-triquetre-triscel VAN DEN HENDE ALAIN CC-BY-SA-40 0718 PDP BG 007.jpg|Triskèle Saint-Marcellin (in Isère / France) File:Triskel_et_Biskel_-_Saint_Antoine_l_Abbaye_- _Alain_Van_den_Hende_17071627_Licence_CC40.jpg|On the front of Abbatial church of Saint-Antoine-l'Abbaye with 2 groups of 2 triskelions and 1 biskel (in Isère / France) File:Triskele karja church.jpg|Mural depicting a triskelion on the ceiling of Karja church in Saaremaa, Estonia ===Modern=== File:Flag of the Isle of Mann.svg|Flag of the Isle of Man File:Sicilian Flag.svg|Flag of Sicily, with the triskeles-and-Gorgoneion symbol File:Flag of Ust-Orda Buryat Autonomous Okrug.svg|Flag of Ust-Orda Buryat Okrug File:Flag of Ingushetia.svg|Flag of Ingushetia File:27. The spiral triskele is one of the primary symbols of Celtic Reconstructionist Paganism, used to represent a variety of triplicities in cosmology and theology; it is also a favored symbol due to its association with the god Manannán mac Lir.Bonewits, Isaac (2006) Bonewits's Essential Guide to Druidism. The three legs (triskeles) symbol is rarely found as a charge in late medieval heraldry, notably as the arms of the King of Mann (Armorial Wijnbergen, ), and as canting arms in the city seal of the Bavarian city of Füssen (dated 1317). 
==Modern usage== The triskeles was included in the design of the Army Gold Medal awarded to British Army majors and above who had taken a key part in the Battle of Maida (1806).Charles Norton Elvin, A Dictionary of Heraldry (1889), p. 126. It later appears in heraldry, and, other than in the flag of Sicily, came to be used in the flag of the Isle of Man (known as ny tree cassyn ""the three legs"").Adopted in 1932, the flag of the Isle of Man is derived from the arms of the King of Mann recorded in the 13th century. thumb|Neolithic triple spiral symbol A triskelion or triskeles is an ancient motif consisting of a triple spiral exhibiting rotational symmetry or other patterns in triplicate that emanate from a common center. Later versions of Sicilian flags have retained the emblem, including the one officially adopted in 2000. Greek (triskelḗs) means ""three-legged"".τρισκελής, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library; from τρι- (tri-), ""three times"" (τρι- , Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library) and ""σκέλος"" (skelos), ""leg (σκέλος, Henry George Liddell, Robert Scott, A Greek–English Lexicon, on Perseus Digital Library) While the Greek adjective ""three-legged [e.g. of a table]"" is ancient, use of the term for the symbol is modern, introduced in 1835 by Honoré Théodoric d'Albert de Luynes as French ,Honore-Theodoric-Paul-Joseph d'Albert de Luynes, Etudes numismatiques sur quelques types relatifs au culte d'Hecate (1835), 83f. and adopted in the spelling triskeles following Otto Olshausen (1886).Johannes Maringer, ""Das Triskeles in der vor- und frühgeschichtlichen Kunst"", Anthropos 74.3/4 (1979), pp. 566-576 The form triskelion (as it were Greek Classical Greek does not have , but the form ""small tripod"" is on record as the diminutive of ""three-pronged"". In Ireland before the 5th century, in Celtic Christianity the symbol took on new meaning, as a symbol of the Trinity (Father, Son, and Holy Spirit). ==Medieval use== The triple spiral design is found as a decorative element in Gothic architecture. The Flag of the Isle of Man (1932) shows a heraldic design of a triskeles of three armoured legs. As a ""Celtic symbol"", it is used primarily by groups with a Celtic cultural orientation and, less frequently, can also be found in use by various eclectic or syncretic traditions such as Neopaganism. Birch's use of triskelos is informed by the Duc de Luynes' triskèle, and it continues to see some use alongside the better-formed triskeles into the 20th century in both English and German, e.g. in a 1932 lecture by C. G. Jung (lecture of 26 October, edited in The Psychology of Kundalini Yoga: Notes of the Seminar Given in 1932. 1996, 43ff.). ==Use in European antiquity== ===Neolithic to Iron Age=== The triple spiral symbol, or three spiral volute, appears in many early cultures, the first in Malta (4400–3600 BCE) and in the astronomical calendar at the famous megalithic tomb of Newgrange in Ireland built around 3200 BCE, as well as on Mycenaean vessels. The Duc de Luynes in his 1835 study noted the co- occurrence of the symbol with the eagle, the cockerel, the head of Medusa, Perseus, three crescent moons, three ears of corn, and three grains of corn. ","The triskeles symbol was reconstructed as a feminine divine triad by the rulers of Syracuse, and later adopted as an emblem. 
Its usage may also be related to the Greek name of Sicily, Trinacria, which means ""having three headlands."" The head of Medusa at the center of the Sicilian triskeles represents the three headlands.","The triskeles symbol is a representation of three interlinked spirals, which was adopted as an emblem by the rulers of Syracuse. Its usage in modern flags of Sicily has its origins in the ancient Greek name for the island, Trinacria, which means ""Sicily with three corners."" The head of Medusa at the center is a representation of the island's rich cultural heritage.","The triskeles symbol is a representation of a triple goddess, reconstructed by the rulers of Syracuse, who adopted it as an emblem. Its significance lies in the fact that it represents the Greek name for Sicily, Trinacria, which contains the element ""tria,"" meaning three. The head of Medusa at the center of the Sicilian triskeles represents the three headlands.","The triskeles symbol represents three interlocked spiral arms, which became an emblem for the rulers of Syracuse. Its usage in modern flags of Sicily is due to the island's rich cultural heritage, which dates back to ancient times. The head of Medusa at the center represents the lasting influence of Greek mythology on Sicilian culture.","The triskeles symbol is a representation of the Greek goddess Hecate, reconstructed by the rulers of Syracuse. Its adoption as an emblem was due to its cultural significance, as it represented the ancient Greek name for Sicily, Trinacria. The head of Medusa at the center of the Sicilian triskeles represents the island's central location in the Mediterranean.",A,kaggle200,"While the Greek adjective ""three-legged [e.g. of a table]"" is ancient, use of the term for the symbol is modern, introduced in 1835 by Honoré Théodoric d'Albert de Luynes as French , and adopted in the spelling ""triskeles"" following Otto Olshausen (1886).
The ancient symbol has been re-introduced in modern flags of Sicily since 1848. The oldest find of a triskeles in Sicily is a vase dated to 700 BCE, for which researchers assume a Minoan-Mycenaean origin.
The actual ""triskeles"" symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period.
The triskeles was adopted as emblem by the rulers of Syracuse. It is possible that this usage is related with the Greek name of the island of Sicily, ""Trinacria"" (Τρινακρία ""having three headlands"").","A triskelion or triskeles is an ancient motif consisting of a triple spiral exhibiting rotational symmetry or other patterns in triplicate that emanate from a common center.
The spiral design can be based on interlocking Archimedean spirals, or represent three bent human legs. It is found in artifacts of the European Neolithic and Bronze Age with continuation into the Iron Age especially in the context of the La Tène culture and related Celtic traditions.
The actual triskeles symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period.
In the Hellenistic period, the symbol becomes associated with the island of Sicily, appearing on coins minted under Dionysius I of Syracuse beginning in c. 382 BCE.
It later appears in heraldry, and, other than in the flag of Sicily, came to be used in the flag of the Isle of Man (known as ny tree cassyn 'the three legs'). Greek τρισκελής (triskelḗs) means 'three-legged'.
While the Greek adjective τρισκελής 'three-legged (e.g., of a table)' is ancient, use of the term for the symbol is modern, introduced in 1835 by Honoré Théodoric d'Albert de Luynes as French triskèle, and adopted in the spelling triskeles following Otto Olshausen (1886).
The form triskelion (as it were Greek τρισκέλιον) is a diminutive which entered English usage in numismatics in the late 19th century.
The form consisting of three human legs (as opposed to the triple spiral) has also been called a ""triquetra of legs"", also triskelos or triskel.
The triskeles was included in the design of the Army Gold Medal awarded to British Army majors and above who had taken a key part in the Battle of Maida (1806).
An early flag of Sicily, proposed in 1848, included the Sicilian triskeles or ""Trinacria symbol"".
Later versions of Sicilian flags have retained the emblem, including the one officially adopted in 2000.
The Flag of the Isle of Man (1932) shows a heraldic design of a triskeles of three armoured legs.
Classical Antiquity: The triskeles proper, composed of three human legs, is younger than the triple spiral, found in decorations on Greek pottery especially as a design shown on hoplite shields, and later also minted on Greek and Anatolian coinage.
An early example is found on the shield of Achilles in an Attic hydria of the late 6th century BCE.
It is found on coinage in Lycia, and on staters of Pamphylia (at Aspendos, 370–333 BCE) and Pisidia. The meaning of the Greek triskeles is not recorded directly.
The Duc de Luynes in his 1835 study noted the co-occurrence of the symbol with the eagle, the cockerel, the head of Medusa, Perseus, three crescent moons, three ears of corn, and three grains of corn.
From this, he reconstructed a feminine divine triad which he identified with the ""triple goddess"" Hecate.
The triskeles was adopted as emblem by the rulers of Syracuse. It is possible that this usage is related with the Greek name of the island of Sicily, Trinacria (Τρινακρία 'having three headlands').
The Sicilian triskeles is shown with the head of Medusa at the center.
The ancient symbol has been re-introduced in modern flags of Sicily since 1848. The oldest find of a triskeles in Sicily is a vase dated to 700 BCE, for which researchers assume a Minoan-Mycenaean origin.
Roman period and Late Antiquity: Late examples of the triple spiral symbols are found in Iron Age Europe, e.g. carved in rock in Castro Culture settlement in Galicia, Asturias and Northern Portugal.
In Ireland before the 5th century, in Celtic Christianity the symbol took on new meaning, as a symbol of the Trinity (Father, Son, and Holy Spirit).","The meaning of the Greek triskeles is not recorded directly.
The Duc de Luynes in his 1835 study noted the co-occurrence of the symbol with the eagle, the cockerel, the head of Medusa, Perseus, three crescent moons, three ears of corn, and three grains of corn.
From this, he reconstructed a feminine divine triad which he identified with the ""triple goddess"" Hecate.
The triskeles was adopted as emblem by the rulers of Syracuse. The oldest find of a triskeles in Sicily is a vase dated to 700 BCE, for which researchers assume a Minoan-Mycenaean origin.
The actual ""triskeles"" symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period.
The triskeles was adopted as emblem by the rulers of Syracuse. It is found in artifacts of the European Neolithic and Bronze Age with continuation into the Iron Age especially in the context of the La Tène culture and related Celtic traditions.
The actual triskeles symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period.
In the Hellenistic period, the symbol becomes associated with the island of Sicily, appearing on coins minted under Dionysius I of Syracuse beginning in c. 382 BCE.
It later appears in heraldry, and, other than in the flag of Sicily, came to be used in the flag of the Isle of Man (known as ny tree cassyn 'the three legs'). Greek τρισκελής (triskelḗs) means 'three-legged'.
While the Greek adjective τρισκελής 'three-legged (e.g., of a table)' is ancient, use of the term for the symbol is modern, introduced in 1835 by Honoré Théodoric d'Albert de Luynes as French triskèle, and adopted in the spelling triskeles following Otto Olshausen (1886).
The form triskelion (as it were Greek τρισκέλιον) is a diminutive which entered English usage in numismatics in the late 19th century.
The form consisting of three human legs (as opposed to the triple spiral) has also been called a ""triquetra of legs"", also triskelos or triskel.
The triskeles was included in the design o","
[SEP]Which of the following statements accurately describes the origin and significance of the triskeles symbol?","['C', 'E', 'D']",0.0
What is the significance of regularization in terms of renormalization problems in physics?,"Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales. == Self-interactions in classical physics == thumbnail|upright=1.3|Figure 1. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must be usually followed by a related, but independent technique called renormalization. Rather than the existence of unknown new physics, assuming the existence of particle interactions with other surrounding particles in the environment, renormalization offers an alternatively strategy to resolve infinities in such classical problems. ==Specific types== Specific types of regularization procedures include *Dimensional regularization *Pauli–Villars regularization *Lattice regularization *Zeta function regularization *Causal regularizationScharf, G.: Finite Quantum Electrodynamics: The Causal Approach, Springer 1995. In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. Similar regularization arguments work in other renormalization problems. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization. ===Opinions=== Paul Dirac was persistently, extremely critical about procedures of renormalization. Renormalization procedures are based on the requirement that certain physical quantities (such as the mass and charge of an electron) equal observed (experimental) values. Regularization: Classical physics theory breaks down at small scales, e.g., the difference between an electron and a point particle shown above. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory. Renormalization is a collection of techniques in quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. In addition, there are qualms about renormalization. *Hadamard regularization ==Realistic regularization== ===Conceptual problem=== Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. Renormalization is based on the requirement that some physical quantities -- expressed by seemingly divergent expressions such as 1/ \epsilon -- are equal to the observed values. The difficulty with a realistic regularization is that so far there is none, although nothing could be destroyed by its bottom-up approach; and there is no experimental basis for it. ===Minimal realistic regularization=== Considering distinct theoretical problems, Dirac in 1963 suggested: ""I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. 
Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Instead of being only a worrisome problem, renormalization has become an important theoretical tool for studying the behavior of field theories in different regimes. Other principles, such as gauge symmetry, must then be used to reduce or eliminate the ambiguity. === Zeta function regularization === Julian Schwinger discovered a relationship between zeta function regularization and renormalization, using the asymptotic relation: : I(n, \Lambda )= \int_0^{\Lambda }dp\,p^n \sim 1+2^n+3^n+\cdots+ \Lambda^n \to \zeta(-n) as the regulator . Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the remaining finite parts of loop diagrams. In general, there will be a pole at the physical value (usually 4) of d, which needs to be canceled by renormalization to obtain physical quantities. showed that dimensional regularization is mathematically well defined, at least in the case of massive Euclidean fields, by using the Bernstein–Sato polynomial to carry out the analytic continuation. ","Regularizing the mass-energy of an electron with a finite radius can theoretically simplify calculations involving infinities or singularities, thereby providing explanations that would otherwise be impossible to achieve.",Regularizing the mass-energy of an electron with an infinite radius allows for the breakdown of a theory that is valid under one set of conditions. This approach can be applied to other renormalization problems as well.,Regularizing the mass-energy of an electron with a finite radius is a means of demonstrating that a system below a certain size can be explained without the need for further calculations. This approach can be applied to other renormalization problems as well.,Regularizing the mass-energy of an electron with an infinite radius can be used to provide a highly accurate description of a system under specific conditions. This approach can be transferred to other renormalization problems as well.,Regularizing the mass-energy of an electron with an infinite radius is essential for explaining how a system below a certain size operates. This approach can be applied to other renormalization problems as well.,C,kaggle200,"Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics.
Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales.
Regularization: This process shows that the physical theory originally used breaks down at small scales. It shows that the electron cannot in fact be a point particle, and that some kind of additional new physics (in this case, a finite radius) is needed to explain systems below a certain scale. This same argument will appear in other renormalization problems: a theory holds in some domain but can be seen to break down and require new physics at other scales in order to avoid infinities. (Another way to avoid the infinity but while retaining the point nature of the particle would be to postulate a small additional dimension over which the particle could 'spread out' rather than over 3D space; this is a motivation for string theory.)
If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, ""In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things.""","Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales.
If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, ""In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things.""
Regularization: Classical physics theory breaks down at small scales, e.g., the difference between an electron and a point particle shown above. Addressing this problem requires new kinds of additional physical constraints. For instance, in this case, assuming a finite electron radius (i.e., regularizing the electron mass-energy) suffices to explain the system below a certain size. Similar regularization arguments work in other renormalization problems. For example, a theory may hold under one narrow set of conditions, but due to calculations involving infinities or singularities, it may break down under other conditions or scales. In the case of the electron, another way to avoid infinite mass-energy while retaining the point nature of the particle is to postulate tiny additional dimensions over which the particle could 'spread out' rather than restrict its motion solely over 3D space. This is precisely the motivation behind string theory and other multi-dimensional models including multiple time dimensions. Rather than assuming the existence of unknown new physics, renormalization offers an alternative strategy for resolving such classical infinities: it assumes the existence of interactions between the particle and other surrounding particles in the environment.","Similar regularization arguments work in other renormalization problems. Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales. If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. Regularization shows that the physical theory originally used breaks down at small scales: assuming a finite electron radius (i.e., regularizing the electron mass-energy) suffices to explain the system below a certain size, and the same argument appears in other renormalization problems.","Similar regularization arguments work in other renormalization problems: regularizing the mass-energy of an electron with a finite radius explains the system below a certain size, and the approach carries over to other renormalization problems.[SEP]What is the significance of regularization in terms of renormalization problems in physics?","['C', 'A', 'B']",1.0
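The row above argues that a finite electron radius regularizes an otherwise divergent self-energy. A minimal worked version of that statement, using the standard electrostatic self-energy of a charged spherical shell (a textbook expression assumed here for illustration, not something given in this row):
\[
E_{\text{self}}(r_e)=\frac{e^{2}}{8\pi\varepsilon_{0} r_e}\;\longrightarrow\;\infty\quad\text{as } r_e\to 0 ,
\]
so keeping the regulator r_e finite keeps the mass-energy finite; demanding E_{\text{self}} = m_e c^2 yields a length of the order of the classical electron radius, r_e \sim e^{2}/(4\pi\varepsilon_{0} m_e c^{2}) \approx 2.8\times10^{-15}\,\text{m}.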
Which of the following statements accurately describes the relationship between the dimensions of a diffracting object and the angular spacing of features in the diffraction pattern?,"Several qualitative observations can be made of diffraction in general: * The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. * The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object. The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. * When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The amount of diffraction depends on the size of the gap. The finer the grating spacing, the greater the angular separation of the diffracted beams. Diffraction is greatest when the size of the gap is similar to the wavelength of the wave. In contrast, the diffraction pattern created near the diffracting object and (in the near field region) is given by the Fresnel diffraction equation. Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope. I Ch. 30: Diffraction * Category:Physical phenomena The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing, between the center of one slit and the next. ==Matter wave diffraction== According to quantum theory every particle exhibits wave properties and can therefore diffract. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams. ===General aperture=== The wave that emerges from a point source has amplitude \psi at location \mathbf r that is given by the solution of the frequency domain wave equation for a point source (the Helmholtz equation), : abla^2 \psi + k^2 \psi = \delta(\mathbf r), where \delta(\mathbf r) is the 3-dimensional delta function. Diffraction is the interference or bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly. ===Babinet's principle=== Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. Hence, diffraction patterns usually have a series of maxima and minima. Diffraction in such a geometrical requirement is called Fraunhofer diffraction, and the condition where Fraunhofer diffraction is valid is called Fraunhofer condition, as shown in the right box. For example, if a 0.5 mm diameter circular hole is illuminated by a laser light with 0.6 μm wavelength, then Fraunhofer diffraction occurs if the viewing distance is greater than 1000 mm. === Derivation of Fraunhofer condition === thumb|293x293px|A geometrical diagram used to derive Fraunhofer condition at which Fraunhofer diffraction is valid. The spacing of the fringes is also inversely proportional to the slit dimension. 
Kinematic diffraction is the approach to study diffraction phenomena by neglecting multiple scattering. ","The angular spacing of features in the diffraction pattern is indirectly proportional to the dimensions of the object causing the diffraction. Therefore, if the diffracting object is smaller, the resulting diffraction pattern will be narrower.","The angular spacing of features in the diffraction pattern is directly proportional to the dimensions of the object causing the diffraction. Therefore, if the diffracting object is smaller, the resulting diffraction pattern will be narrower.","The angular spacing of features in the diffraction pattern is independent of the dimensions of the object causing the diffraction. Therefore, if the diffracting object is smaller, the resulting diffraction pattern will be the same as if it were big.","The angular spacing of features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. Therefore, if the diffracting object is smaller, the resulting diffraction pattern will be wider.","The angular spacing of features in the diffraction pattern is directly proportional to the square root of the dimensions of the object causing the diffraction. Therefore, if the diffracting object is smaller, the resulting diffraction pattern will be slightly narrower.",D,kaggle200,"We will begin from the ""N""-slit diffraction pattern derived on the diffraction formalism page, with formula_1 slits of equal size formula_2 and spacing formula_3.
Schaefer–Bergmann diffraction is the resulting diffraction pattern of light interacting with sound waves in transparent crystals or glasses.
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens.
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when plane waves are incident on a diffracting object, and the diffraction pattern is viewed at a sufficiently long distance (a distance satisfying Fraunhofer condition) from the object (in the far-field region), and also when it is viewed at the focal plane of an imaging lens. In contrast, the diffraction pattern created near the diffracting object (in the near field region) is given by the Fresnel diffraction equation.","In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when plane waves are incident on a diffracting object, and the diffraction pattern is viewed at a sufficiently long distance (a distance satisfying Fraunhofer condition) from the object (in the far-field region), and also when it is viewed at the focal plane of an imaging lens. In contrast, the diffraction pattern created near the diffracting object and (in the near field region) is given by the Fresnel diffraction equation.
Several qualitative observations can be made of diffraction in general: The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: the smaller the diffracting object, the wider the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.) The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
Several qualitative observations can be made of diffraction in general: The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.) The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.","In contrast, the diffraction pattern created near the diffracting object and (in the near field region) is given by the Fresnel diffraction equation.
Several qualitative observations can be made of diffraction in general: The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: the smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.) The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
We will begin from the ""N""-slit diffraction pattern derived on the diffraction formalism page, with formula_1 slits of equal size formula_2 and spacing formula_3.
Schaefer–Bergmann diffraction is the resulting diffraction pattern of light interacting with sound waves in transparent crystals or glasses.
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens.
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when plane waves are incident on a diffracting object, and the diffraction pattern is viewed at a sufficiently long distance (a distance satisfying the Fraunhofer condition) from the object (in the far-field region).","The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction: the smaller the diffracting object, the wider the resulting diffraction pattern, and vice versa.[SEP]Which of the following statements accurately describes the relationship between the dimensions of a diffracting object and the angular spacing of features in the diffraction pattern?","['D', 'E', 'C']",1.0
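As a short worked check of the inverse relation stated in this row, take the standard single-slit Fraunhofer minima condition (the slit width a and wavelength \lambda below are illustrative symbols, not values from the row):
\[
a\sin\theta_m = m\lambda,\quad m=\pm1,\pm2,\dots \qquad\Longrightarrow\qquad \sin\theta_1=\frac{\lambda}{a},
\]
so halving the slit width a doubles \sin\theta_1: a smaller diffracting object gives a wider pattern, and the angles depend only on the ratio \lambda/a, consistent with the scale invariance noted above.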
"Which of the following statements accurately depicts the relationship between Gauss's law, electric flux, electric field, and symmetry in electric fields?","For a closed Gaussian surface, electric flux is given by: where * is the electric field, * is any closed surface, * is the total electric charge inside the surface , * is the electric constant (a universal constant, also called the ""permittivity of free space"") () This relation is known as Gauss' law for electric fields in its integral form and it is one of Maxwell's equations. Under these circumstances, Gauss's law modifies to \Phi_E = \frac{Q_\mathrm{free}}{\varepsilon} for the integral form, and abla \cdot \mathbf{E} = \frac{\rho_\mathrm{free}}{\varepsilon} for the differential form. ==Interpretations== ===In terms of fields of force=== Gauss's theorem can be interpreted in terms of the lines of force of the field as follows: The flux through a closed surface is dependent upon both the magnitude and direction of the electric field lines penetrating the surface. See the article Gaussian surface for examples where these symmetries are exploited to compute electric fields. ===Differential form=== By the divergence theorem, Gauss's law can alternatively be written in the differential form: abla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0 \varepsilon_r} where is the divergence of the electric field, is the vacuum permittivity, \varepsilon_r is the relative permittivity, and is the volume charge density (charge per unit volume). ===Equivalence of integral and differential forms=== The integral and differential forms are mathematically equivalent, by the divergence theorem. While Gauss's law holds for all situations, it is most useful for ""by hand"" calculations when high degrees of symmetry exist in the electric field. Gauss's law may be expressed as: \Phi_E = \frac{Q}{\varepsilon_0} where is the electric flux through a closed surface enclosing any volume , is the total charge enclosed within , and is the electric constant. Since the flux is defined as an integral of the electric field, this expression of Gauss's law is called the integral form. thumb|A tiny Gauss's box whose sides are perpendicular to a conductor's surface is used to find the local surface charge once the electric potential and the electric field are calculated by solving Laplace's equation. In physics and electromagnetism, Gauss's law, also known as Gauss's flux theorem, (or sometimes simply called Gauss's theorem) is a law relating the distribution of electric charge to the resulting electric field. The flux is defined analogously to the flux of the electric field through : : ===Differential form=== The differential form of Gauss's law, involving free charge only, states: abla \cdot \mathbf{D} = \rho_\mathrm{free} where is the divergence of the electric displacement field, and is the free electric charge density. ==Equivalence of total and free charge statements== ==Equation for linear materials== In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship between and : \mathbf{D} = \varepsilon \mathbf{E} where is the permittivity of the material. It is one of Maxwell's equations, which forms the basis of classical electrodynamics.The other three of Maxwell's equations are: Gauss's law for magnetism, Faraday's law of induction, and Ampère's law with Maxwell's correction Gauss's law can be used to derive Coulomb's law, and vice versa. 
==Qualitative description== In words, Gauss's law states: :The net electric flux through any hypothetical closed surface is equal to times the net electric charge enclosed within that closed surface. For a non-uniform electric field, the electric flux through a small surface area is given by \textrm d\Phi_E = \mathbf{E} \cdot \textrm d\mathbf{S} (the electric field, , multiplied by the component of area perpendicular to the field). Where no such symmetry exists, Gauss's law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge. The result is that the more fundamental Gauss's law, in terms of (above), is sometimes put into the equivalent form below, which is in terms of and the free charge only. ===Integral form=== This formulation of Gauss's law states the total charge form: \Phi_D = Q_\mathrm{free} where is the -field flux through a surface which encloses a volume , and is the free charge contained in . The integral and differential forms of Gauss's law for magnetism are mathematically equivalent, due to the divergence theorem. Gauss's law has a close mathematical similarity with a number of laws in other areas of physics, such as Gauss's law for magnetism and Gauss's law for gravity. Gauss's law makes it possible to find the distribution of electric charge: The charge in any given region of the conductor can be deduced by integrating the electric field to find the flux through a small box whose sides are perpendicular to the conductor's surface and by noting that the electric field is perpendicular to the surface, and zero inside the conductor. If the electric field is uniform, the electric flux passing through a surface of vector area is \Phi_E = \mathbf{E} \cdot \mathbf{S} = ES \cos \theta, where is the electric field (having units of ), is its magnitude, is the area of the surface, and is the angle between the electric field lines and the normal (perpendicular) to . The electric flux is defined as a surface integral of the electric field: : where is the electric field, is a vector representing an infinitesimal element of area of the surface, and represents the dot product of two vectors. Electric flux is proportional to the total number of electric field lines going through a surface. The electric flux over a surface is therefore given by the surface integral: \Phi_E = \iint_S \mathbf{E} \cdot \textrm{d}\mathbf{S} where is the electric field and is a differential area on the closed surface with an outward facing surface normal defining its direction. Each of these forms in turn can also be expressed two ways: In terms of a relation between the electric field and the total electric charge, or in terms of the electric displacement field and the free electric charge. ==Equation involving the field== Gauss's law can be stated using either the electric field or the electric displacement field . ","Gauss's law holds only for situations involving symmetric electric fields, like those with spherical or cylindrical symmetry, and doesn't apply to other field types. Electric flux, as an expression of the total electric field passing through a closed surface, is influenced only by charges within the surface and unaffected by distant charges located outside it. 
The scalar quantity electric flux is strictly measured in SI fundamental quantities as kg·m3·s−3·A.","Gauss's law holds in all cases, but it is most useful for calculations involving symmetric electric fields, like those with spherical or cylindrical symmetry, as they allow for simpler algebraic manipulations. Electric flux is not affected by distant charges outside the closed surface, whereas the net electric field, E, can be influenced by any charges positioned outside of the closed surface. In SI base units, the electric flux is expressed as kg·m3·s−3·A−1.","Gauss's law, which applies equally to all electric fields, is typically most useful when dealing with symmetric field configurations, like those with spherical or cylindrical symmetry, since it makes it easier to calculate the total electric flux. Electric flux, an expression of the total electric field through a closed surface, is unaffected by charges outside the surface, while net electric field, E, may be influenced by charges located outside the closed surface. Electric flux is expressed in SI base units as kg·m3·s−1·C.","Gauss's law only holds for electric fields with cylindrical symmetry, like those of a straight long wire; it is not applicable to fields with other types of symmetry. Electric flux, which measures the total electric field across a closed surface, is influenced by all charges within the surface as well as by those located outside it. The unit of electric flux in SI base units is kg·m2·s−2·A−1.","Gauss's law, which holds for all situations, is most beneficial when applied to electric fields that exhibit higher degrees of symmetry, like those with cylindrical and spherical symmetry. While electric flux is unaffected by charges outside of a given closed surface, the net electric field, E, may be affected by them. The unit of electric flux in SI base units is kg·m2·s−1·C.",B,kaggle200,"(the electric field, , multiplied by the component of area perpendicular to the field). The electric flux over a surface is therefore given by the surface integral:
In electromagnetism, electric flux is the measure of the electric field through a given surface, although an electric field in itself cannot flow.
where \Phi_E is the electric flux through a closed surface S enclosing any volume V, Q is the total charge enclosed within V, and \varepsilon_0 is the electric constant. The electric flux is defined as a surface integral of the electric field:
While the electric flux is not affected by charges that are not within the closed surface, the net electric field, E, can be affected by charges that lie outside the closed surface. While Gauss's law holds for all situations, it is most useful for ""by hand"" calculations when high degrees of symmetry exist in the electric field. Examples include spherical and cylindrical symmetry.","Electromagnetism, electric flux: An electric ""charge,"" such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called ""lines of force""). Conceptually, electric flux can be thought of as ""the number of field lines"" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.) Two forms of electric flux are used, one for the E-field, \Phi_E = \iint_A \mathbf{E} \cdot \mathrm{d}\mathbf{A}, and one for the D-field (called the electric displacement), \Phi_D = \iint_A \mathbf{D} \cdot \mathrm{d}\mathbf{A}. This quantity arises in Gauss's law, which states that the flux of the electric field E out of a closed surface is proportional to the electric charge Q_A enclosed in the surface (independent of how that charge is distributed); the integral form is \oint_A \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q_A}{\varepsilon_0}, where \varepsilon_0 is the permittivity of free space.
For a closed Gaussian surface, electric flux is given by \Phi_E = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q}{\varepsilon_0}, where E is the electric field, S is any closed surface, Q is the total electric charge inside the surface S, and ε0 is the electric constant (a universal constant, also called the ""permittivity of free space"") (ε0 ≈ 8.854187817×10−12 F/m). This relation is known as Gauss' law for electric fields in its integral form and it is one of Maxwell's equations.
While the electric flux is not affected by charges that are not within the closed surface, the net electric field, E, can be affected by charges that lie outside the closed surface. While Gauss's law holds for all situations, it is most useful for ""by hand"" calculations when high degrees of symmetry exist in the electric field. Examples include spherical and cylindrical symmetry.","While Gauss's law holds for all situations, it is most useful for ""by hand"" calculations when high degrees of symmetry exist in the electric field. Conceptually, electric flux can be thought of as ""the number of field lines"" passing through a given area (the electric field multiplied by the component of area perpendicular to the field); mathematically, it is the integral of the normal component of the electric field over that area. For a closed Gaussian surface S, the flux is \Phi_E = \oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = Q/\varepsilon_0, where Q is the total electric charge inside S and \varepsilon_0 is the electric constant (ε0 ≈ 8.854187817×10−12 F/m); this relation is Gauss' law for electric fields in its integral form and is one of Maxwell's equations. While the electric flux is not affected by charges that are not within the closed surface, the net electric field, E, can be affected by charges that lie outside the closed surface.","While Gauss's law holds for all situations, it is most useful for ""by hand"" calculations when high degrees of symmetry exist in the electric field; the electric flux through a closed surface is determined by the enclosed charge alone, while the net electric field, E, can be affected by charges outside the surface.[SEP]Which of the following statements accurately depicts the relationship between Gauss's law, electric flux, electric field, and symmetry in electric fields?","['B', 'C', 'E']",1.0
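A minimal worked application of the symmetry argument in this row, for the textbook case of a point charge Q at the centre of a spherical Gaussian surface of radius r (assumed here purely for illustration):
\[
\Phi_E=\oint_S \mathbf{E}\cdot\mathrm{d}\mathbf{A}=E(r)\,4\pi r^{2}=\frac{Q}{\varepsilon_{0}}
\qquad\Longrightarrow\qquad
E(r)=\frac{Q}{4\pi\varepsilon_{0} r^{2}},
\]
and dimensionally \Phi_E carries units of V·m, which in SI base units is kg·m³·s⁻³·A⁻¹, the unit quoted in option B.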
Which of the following statements accurately describes the dimension of an object in a CW complex?,"This extra-block can be treated as a (-1)-dimensional cell in the former definition. == Examples == === 0-dimensional CW complexes === Every discrete topological space is a 0-dimensional CW complex. === 1-dimensional CW complexes === Some examples of 1-dimensional CW complexes are:Archived at Ghostarchive and the Wayback Machine: * An interval. * In general, an n-dimensional CW complex is constructed by taking the disjoint union of a k-dimensional CW complex (for some k) with one or more copies of the n-dimensional ball. * A 1-dimensional CW complex is constructed by taking the disjoint union of a 0-dimensional CW complex with one or more copies of the unit interval. Dimensioning is the process of measuring either the area or the volume that an object occupies. It is a 1-dimensional CW complex in which the 0-cells are the vertices and the 1-cells are the edges. In a technical drawing, a basic dimension is a theoretically exact dimension, given from a datum to a feature of interest. It may also refer to any other concept of dimension that is defined in terms of homological algebra, which includes: * Projective dimension of a module, based on projective resolutions * Injective dimension of a module, based on injective resolutions * Weak dimension of a module, or flat dimension, based on flat resolutions * Weak global dimension of a ring, based on the weak dimension of its modules * Cohomological dimension of a group Category:Homological algebra Basic dimensions are currently denoted by enclosing the number of the dimension in a rectangle. * The terminology for a generic 2-dimensional CW complex is a shadow. It admits a CW structure with one cell in each dimension. In Geometric dimensioning and tolerancing, basic dimensions are defined as a numerical value used to describe the theoretically exact size, profile, orientation or location of a feature or datum target.ASME Y14.5M-1994 Dimensioning and Tolerancing Allowable variations from the theoretically exact geometry are indicated by feature control, notes, and tolerances on other non-basic dimensions. * An infinite-dimensional CW complex can be constructed by repeating the above process countably many times. Homological dimension may refer to the global dimension of a ring. In mathematics, complex dimension usually refers to the dimension of a complex manifold or a complex algebraic variety.. * The standard CW structure on the real numbers has as 0-skeleton the integers \mathbb Z and as 1-cells the intervals \\{ [n,n+1] : n \in \mathbb Z\\}. A loopless graph is represented by a regular 1-dimensional CW-complex. \- Why dimensioning? Consider, for example, an arbitrary CW complex. * A polyhedron is naturally a CW complex. If a CW complex X is n-connected one can find a homotopy-equivalent CW complex \tilde X whose n-skeleton X^n consists of a single point. ","The dimension of an object in a CW complex is the largest n for which the n-skeleton is nontrivial, where the empty set is considered to have dimension -1 and the boundary of a discrete set of points is the empty set.","The dimension of an object in a CW complex is determined by the number of critical points the object contains. The boundary of a discrete set of points is considered to have dimension 1, while the empty set is given a dimension of 0.","The dimension of an object in a CW complex is the smallest n for which the n-skeleton is nontrivial. 
The empty set is given a dimension of -1, while the boundary of a discrete set of points is assigned a dimension of 0.","The dimension of an object in a CW complex is calculated by counting the number of cells of all dimensions in the object. The empty set is given a dimension of 0, while the boundary of a discrete set of points is assigned a dimension of -1.",The dimension of an object in a CW complex depends on the number of singularities in the object. The empty set and the boundary of a discrete set of points are both assigned a dimension of 0.,A,kaggle200,"There is a technique, developed by Whitehead, for replacing a CW complex with a homotopy-equivalent CW complex that has a ""simpler"" CW decomposition.
formula_3 admits a Morse function with critical points of index at most ""n"", and so formula_3 is homotopy equivalent to a CW complex of real dimension at most ""n"".
Consequently, if formula_7 is a closed connected complex submanifold of complex dimension formula_2, then formula_3 has the homotopy type of a CW complex of real dimension formula_10.
An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a ""new direction"", one obtains a 2-dimensional object. In general one obtains an ()-dimensional object by dragging an -dimensional object in a ""new"" direction. The inductive dimension of a topological space may refer to the ""small inductive dimension"" or the ""large inductive dimension"", and is based on the analogy that, in the case of metric spaces, balls have -dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1.","In light of the smooth structure, the existence of a Morse function would show RPn is a CW complex. One such function is given by, in homogeneous coordinates, On each neighborhood Ui, g has nondegenerate critical point (0,...,1,...,0) where 1 occurs in the i-th position with Morse index i. This shows RPn is a CW complex with 1 cell in every dimension.
The terminology for a generic 2-dimensional CW complex is a shadow.
A polyhedron is naturally a CW complex.
Grassmannian manifolds admit a CW structure called Schubert cells.
Differentiable manifolds, algebraic and projective varieties have the homotopy-type of CW complexes.
The one-point compactification of a cusped hyperbolic manifold has a canonical CW decomposition with only one 0-cell (the compactification point) called the Epstein–Penner Decomposition. Such cell decompositions are frequently called ideal polyhedral decompositions and are used in popular computer software, such as SnapPea.
Infinite-dimensional CW complexes; non-CW-complexes: An infinite-dimensional Hilbert space is not a CW complex: it is a Baire space and therefore cannot be written as a countable union of n-skeletons, each of which is a closed set with empty interior. This argument extends to many other infinite-dimensional spaces.
The hedgehog space \{ r e^{2\pi i \theta} : 0 \le r \le 1,\ \theta \in \mathbb{Q} \} \subseteq \mathbb{C} is homotopic to a CW complex (the point) but it does not admit a CW decomposition, since it is not locally contractible.
The Hawaiian earring is not homotopic to a CW complex. It has no CW decomposition, because it is not locally contractible at origin. It is not homotopic to a CW complex, because it has no good open cover.
An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1.Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.","Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1.Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial- There is a technique, developed by Whitehead, for replacing a CW complex with a homotopy-equivalent CW complex that has a ""simpler"" CW decomposition.
formula_3 admits a Morse function with critical points of index at most ""n"", and so formula_3 is homotopy equivalent to a CW complex of real dimension at most ""n"".
Consequently, if formula_7 is a closed connected complex submanifold of complex dimension formula_2, then formula_3 has the homotopy type of a CW complex of real dimension formula_10.
The inductive dimension of a topological space may refer to the ""small inductive dimension"" or the ""large inductive dimension"", and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets.
The terminology for a generic 2-dimensional CW complex is a shadow.
A polyhedron is naturally a CW complex.
Grassmannian manifolds admit a CW structure called Schubert cells.
Differentiable manifolds, algebraic and projective varieties have the homotopy-type of CW complexes.
The Hawaiian earring is not homotopic to a CW complex: it has no CW decomposition, because it is not locally contractible at the origin, and it has no good open cover.
The one-point compactification of a cusped hyperbolic manifold has a canonical CW decomposition with only one 0-cell (the compactification point) called the Epstein–Penner Decomposition.","Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1. Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial.[SEP]Which of the following statements accurately describes the dimension of an object in a CW complex?","['A', 'C', 'D']",1.0
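A small worked example of the skeleton convention described in this row, using the standard minimal CW structure on the n-sphere (one 0-cell and one n-cell, assumed here for illustration):
\[
S^{n}=e^{0}\cup e^{n}:\qquad X^{0}=X^{1}=\cdots=X^{n-1}=\{\mathrm{pt}\},\qquad X^{k}=S^{n}\ \text{for }k\ge n,
\]
so the top dimension in which a cell occurs is n and \dim S^{n}=n; a discrete set of points (0-cells only) has dimension 0, and the empty set is assigned dimension -1.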
Which of the following statements accurately describes the blocking temperature of an antiferromagnetic layer in a spin valve?,"The temperature at or above which an antiferromagnetic layer loses its ability to ""pin"" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature. ==Geometric frustration== Unlike ferromagnetism, anti-ferromagnetic interactions can lead to multiple optimal states (ground states—states of minimal energy). The non-magnetic layer is required to decouple the two ferromagnetic layers so that at least one of them remains free (magnetically soft). === Pseudo spin valves === The basic operating principles of a pseudo spin valve are identical to that of an ordinary spin valve, but instead of changing the magnetic coercivity of the different ferromagnetic layers by pinning one with an antiferromagnetic layer, the two layers are made of different ferromagnets with different coercivities e.g., NiFe and Co. Note that coercivities are largely an extrinsic property of materials and thus determined by processing conditions. == Applications == Spin valves are used in magnetic sensors and hard disk read heads. In the simplest case, a spin valve consists of a non- magnetic material sandwiched between two ferromagnets, one of which is fixed (pinned) by an antiferromagnet which acts to raise its magnetic coercivity and behaves as a ""hard"" layer, while the other is free (unpinned) and behaves as a ""soft"" layer. The magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. Above the Néel temperature, the material is typically paramagnetic. == Measurement == When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. A spin valve is a device, consisting of two or more conducting magnetic materials, whose electrical resistance can change between two values depending on the relative alignment of the magnetization in the layers. When it reaches the free layer the majority spins relax into lower-energy states of opposite spin, applying a torque to the free layer in the process.thumb|300px|right|A schematic diagram of a spin valve/magnetic tunnel junction. There are also examples of disordered materials (such as iron phosphate glasses) that become antiferromagnetic below their Néel temperature. In a spin valve the spacer layer (purple) is metallic; in a magnetic tunnel junction it is insulating. Upon application of a magnetic field of appropriate strength, the soft layer switches polarity, producing two distinct states: a parallel, low-resistance state, and an antiparallel, high-resistance state. == How it works == Spin valves work because of a quantum property of electrons (and other particles) called spin. Generally, antiferromagnetic order may exist at sufficiently low temperatures, but vanishes at and above the Néel temperature – named after Louis Néel, who had first identified this type of magnetic ordering. thumb|right|Antisymmetric exchange would align spins perpendicular to each other Some antiferromagnetic materials exhibit a non-zero magnetic moment at a temperature near absolute zero. 
Thus if both the fixed and free layers are polarised in the same direction, the device has relatively low electrical resistance, whereas if the applied magnetic field is reversed and the free layer's polarity also reverses, then the device has a higher resistance due to the extra energy required for spin flip scattering. === Antiferromagnetic and non-magnetic layers === An antiferromagnetic layer is required to pin one of the ferromagnetic layers (i.e., make it fixed or magnetically hard). In radiometric dating, closure temperature or blocking temperature refers to the temperature of a system, such as a mineral, at the time given by its radiometric date. This temperature is what is known as blocking temperature and represents the temperature below which the mineral is a closed system to measurable diffusion of isotopes. In an external magnetic field, a kind of ferrimagnetic behavior may be displayed in the antiferromagnetic phase, with the absolute value of one of the sublattice magnetizations differing from that of the other sublattice, resulting in a nonzero net magnetization. This provides the ability to ""pin"" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. Dipole coupling of the ferromagnetic layers results in antiparallel alignment of the magnetization of the ferromagnets. Spin transmission depends on the alignment of magnetic moments in the ferromagnets. thumb|300px|right|A simple model of spin-transfer torque for two anti-aligned layers. ",The blocking temperature of an antiferromagnetic layer in a spin valve is the temperature at which the magnetization of the ferromagnetic layer becomes aligned with the magnetic field. The blocking temperature is typically higher than the Néel temperature.,"The blocking temperature of an antiferromagnetic layer in a spin valve is the temperature below which the layer loses its ability to ""pin"" the magnetization direction of an adjacent ferromagnetic layer. The blocking temperature is typically higher than the Néel temperature.",The blocking temperature of an antiferromagnetic layer in a spin valve is the temperature at which the ferromagnetic layer becomes completely demagnetized. The blocking temperature is typically higher than the Néel temperature.,The blocking temperature of an antiferromagnetic layer in a spin valve is the temperature at or above which the layer ceases to prevent the orientation of an adjacent ferromagnetic layer. The blocking temperature is typically lower than the Néel temperature.,"The blocking temperature of an antiferromagnetic layer in a spin valve is the temperature at which the ferromagnetic layer loses its ability to ""pin"" the magnetization direction of an adjacent antiferromagnetic layer. The blocking temperature is typically higher than the Néel temperature.",D,kaggle200,"The basic operating principles of a pseudo spin valve are identical to that of an ordinary spin valve, but instead of changing the magnetic coercivity of the different ferromagnetic layers by pinning one with an antiferromagnetic layer, the two layers are made of different ferromagnets with different coercivities e.g., NiFe and Co. Note that coercivities are largely an extrinsic property of materials and thus determined by processing conditions.
The so-called magnetic blocking temperature, ""T"", is defined as the temperature below which the relaxation of the magnetization becomes slow compared to the time scale of a particular investigation technique. Historically, the blocking temperature for single-molecule magnets has been defined as the temperature at which the molecule's magnetic relaxation time, ""τ"", is 100 seconds. This definition is the current standard for comparison of single-molecule magnet properties, but otherwise is not technologically significant. There is typically a correlation between increasing an SMM's blocking temperature and energy barrier. The average blocking temperature for SMMs is 4K. Dy-metallocenium salts are the most recent SMM to achieve the highest temperature of magnetic hysteresis, greater than that of liquid nitrogen.
An antiferromagnetic layer is required to pin one of the ferromagnetic layers (i.e., make it fixed or magnetically hard). This results from a large negative exchange coupling energy between ferromagnets and antiferromagnets in contact.
Antiferromagnets can couple to ferromagnets, for instance, through a mechanism known as exchange bias, in which the ferromagnetic film is either grown upon the antiferromagnet or annealed in an aligning magnetic field, causing the surface atoms of the ferromagnet to align with the surface atoms of the antiferromagnet. This provides the ability to ""pin"" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. The temperature at or above which an antiferromagnetic layer loses its ability to ""pin"" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature.","Blocking temperature The reason the characteristics of the field are conserved comes from the concept of blocking temperature (also known as closure temperature in geochronology). This temperature is where the system becomes blocked against thermal agitation at lower temperatures. Therefore, some minerals exhibit remnant magnetization. One problem that arises in the determination of remnant (or fossil) magnetization is that if the temperature rises above this point, the magnetic history is destroyed. However, in theory it should be possible to relate the magnetic blocking temperature to the isotopic closure temperature, such that it could be checked whether or not a sample can be used.
Magnetic blocking temperature The so-called magnetic blocking temperature, TB, is defined as the temperature below which the relaxation of the magnetization becomes slow compared to the time scale of a particular investigation technique. Historically, the blocking temperature for single-molecule magnets has been defined as the temperature at which the molecule's magnetic relaxation time, τ, is 100 seconds. This definition is the current standard for comparison of single-molecule magnet properties, but otherwise is not technologically significant. There is typically a correlation between increasing an SMM's blocking temperature and energy barrier. The average blocking temperature for SMMs is 4K. Dy-metallocenium salts are the most recent SMM to achieve the highest temperature of magnetic hysteresis, greater than that of liquid nitrogen.
Antiferromagnets can couple to ferromagnets, for instance, through a mechanism known as exchange bias, in which the ferromagnetic film is either grown upon the antiferromagnet or annealed in an aligning magnetic field, causing the surface atoms of the ferromagnet to align with the surface atoms of the antiferromagnet. This provides the ability to ""pin"" the orientation of a ferromagnetic film, which provides one of the main uses in so-called spin valves, which are the basis of magnetic sensors including modern hard disk drive read heads. The temperature at or above which an antiferromagnetic layer loses its ability to ""pin"" the magnetization direction of an adjacent ferromagnetic layer is called the blocking temperature of that layer and is usually lower than the Néel temperature.","","[SEP]Which of the following statements accurately describes the blocking temperature of an antiferromagnetic layer in a spin valve?","['D', 'E', 'B']",1.0
What is the term used in astrophysics to describe light-matter interactions resulting in energy shifts in the radiation field?,"While such phenomena are sometimes referred to as ""redshifts"" and ""blueshifts"", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as ""reddening"" rather than ""redshifting"" which, as a term, is normally reserved for the effects discussed above. In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). The opposite change, a decrease in wavelength and simultaneous increase in frequency and energy, is known as a negative redshift, or blueshift. For example, Doppler effect blueshifts () are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line of sight. ==Blueshift== The opposite of a redshift is a blueshift. A blueshift is any decrease in wavelength (increase in energy), with a corresponding increase in frequency, of an electromagnetic wave. Other physical processes exist that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from (astronomical) redshift and are not generally referred to as such (see section on physical optics and radiative transfer). ==History== The history of the subject began with the development in the 19th century of classical wave mechanics and the exploration of phenomena associated with the Doppler effect. Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than being by a transformation between reference frames. Examples of strong redshifting are a gamma ray perceived as an X-ray, or initially visible light perceived as radio waves. This is known as the gravitational redshift or Einstein Shift. Conversely, Doppler effect redshifts () are associated with objects receding (moving away) from the observer with the light shifting to lower energies. In visible light, this shifts a color towards the blue end of the spectrum. === Doppler blueshift === thumb|Doppler redshift and blueshift Doppler blueshift is caused by movement of a source towards the observer. In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening—similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. The term applies to any decrease in wavelength and increase in frequency caused by relative motion, even outside the visible spectrum. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions. 
== Redshift formulae == In general relativity one can derive several important special-case formulae for redshift in certain special spacetime geometries, as summarized in the following table. Dark radiation (also dark electromagnetism) is a postulated type of radiation that mediates interactions of dark matter. These types of galaxies are called ""blue outliers"". ===Cosmological blueshift=== In a hypothetical universe undergoing a runaway Big Crunch contraction, a cosmological blueshift would be observed, with galaxies further away being increasingly blueshifted—the exact opposite of the actually observed cosmological redshift in the present expanding universe. ==See also== * Cosmic crystallography * Gravitational potential * Relativistic Doppler effect ==References== ==Sources== ===Articles=== * Odenwald, S. & Fienberg, RT. 1993; ""Galaxy Redshifts Reconsidered"" in Sky & Telescope Feb. 2003; pp31–35 (This article is useful further reading in distinguishing between the 3 types of redshift and their causes.) BRET may refer to: *Background Radiation Equivalent Time *Bioluminescence resonance energy transfer Consequently, this type of redshift is called the Doppler redshift. ",Blueshifting,Redshifting,Reddening,Whitening,Yellowing,C,kaggle200,"She was named a Fellow of the American Physical Society in 2022 ""for pioneering advancements to the measurement science of Raman spectroscopy for quantifying light-matter interactions of low-dimensional materials, including nanoparticles, carbon nanotubes, graphene, and 2D materials, and outstanding mentorship of women in physics"".
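The redshift definitions above reduce to the relative wavelength difference z = (λ_obs − λ_emit)/λ_emit, with the sign distinguishing redshift from blueshift. A small sketch computing z and the corresponding special-relativistic radial velocity follows; the observed wavelength used is an invented illustration.

```python
import math

c = 299_792.458  # speed of light, km/s

def redshift(lam_obs, lam_emit):
    """z > 0 means redshift (receding source), z < 0 means blueshift (approaching)."""
    return (lam_obs - lam_emit) / lam_emit

def radial_velocity_from_z(z):
    """Special-relativistic radial Doppler relation: 1 + z = sqrt((1 + beta)/(1 - beta))."""
    factor = (1.0 + z) ** 2
    beta = (factor - 1.0) / (factor + 1.0)
    return beta * c

lam_emit = 656.28   # H-alpha rest wavelength, nm
lam_obs = 662.0     # illustrative observed wavelength, nm (assumed)
z = redshift(lam_obs, lam_emit)
print(f"z = {z:+.4f} -> {'redshift' if z > 0 else 'blueshift'}, "
      f"v_r ~ {radial_velocity_from_z(z):.0f} km/s")
```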
Cundiff's research interests broadly encompass nonlinear light-matter interactions and advancing ultrafast optical technologies.
The focus of ""ACS Photonics"" is the science of photonics and light-matter interactions. The areas of research reported in the journal are:
The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than being by a transformation between reference frames. Such shifts can be from such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as ""redshifts"" and ""blueshifts"", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as ""reddening"" rather than ""redshifting"" which, as a term, is normally reserved for the effects discussed above.","She was named a Fellow of the American Physical Society in 2022 ""for pioneering advancements to the measurement science of Raman spectroscopy for quantifying light-matter interactions of low-dimensional materials, including nanoparticles, carbon nanotubes, graphene, and 2D materials, and outstanding mentorship of women in physics"".
The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases, the shifts correspond to a physical energy transfer to matter or other photons rather than being by a transformation between reference frames. Such shifts can be from such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as ""redshifts"" and ""blueshifts"", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as ""reddening"" rather than ""redshifting"" which, as a term, is normally reserved for the effects discussed above. In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated z is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and z is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well. In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening—similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line of sight.
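A numerical illustration of the distinction drawn above, that reddening dims short wavelengths more than long ones without moving spectral lines while a redshift moves the lines themselves, is sketched below; the λ^-4 optical depth and the line list are illustrative assumptions.

```python
import numpy as np

lines = np.array([486.1, 656.3])   # H-beta and H-alpha rest wavelengths in nm

# Reddening: wavelength-dependent dimming (a Rayleigh-like lambda^-4 optical depth,
# purely illustrative); the line positions themselves do not move.
tau = 0.1 * (550.0 / lines) ** 4
print("reddened line positions (nm):", lines)                    # unchanged
print("transmitted flux fraction   :", np.exp(-tau).round(3))    # blue dimmed more

# Redshifting: every wavelength is stretched by (1 + z), so the lines move.
z = 0.01
print("redshifted line positions   :", (lines * (1.0 + z)).round(1))
```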
The main purpose of the software is to make it easier for individuals to perform bibliometric analysis and visualization tasks. Bibliometric analysis is the use of the bibliographic information (titles, authors, dates, author addresses, references, etc.) that describe published items to measure and otherwise study various aspects of a specific field of scholarly endeavor.Some typical questions asked by bibliometricians that can be answered by HistCite analysis are: How much literature has been published in this field? When and in what countries has it been published? What countries are the major contributors to this field? What are the languages most frequently used by the items published in this field? What journals cover the literature of the field? Which are the most important? Who are the key authors in this field? What institutions do these authors represent? Which articles are the most important? How have the various contributors to the field influenced each other?The answers to such questions are valuable to researchers, librarians, and administrators.","","[SEP]What is the term used in astrophysics to describe light-matter interactions resulting in energy shifts in the radiation field?","['C', 'B', 'D']",1.0
What is the role of axioms in a formal theory?,"A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules. The explication of the particular axioms used in a theory can help to clarify a suitable level of abstraction that the mathematician would like to work with. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. These rules, which are used for carrying out the inference of theorems from axioms, are the logical calculus of the formal system. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A formal system is essentially an ""axiomatic system"". A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. An axiomatic system that is completely described is a special kind of formal system. Thus all axioms are considered theorems. A formal proof is a complete rendition of a mathematical proof within a formal system. == Properties == An axiomatic system is said to be consistent if it lacks contradiction. Models can also be used to show the independence of an axiom in the system. The point of view that generating formal proofs is all there is to mathematics is often called formalism. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. Axioms is a peer-reviewed open access scientific journal that focuses on all aspects of mathematics, mathematical logic and mathematical physics. By definition, every axiom is automatically a theorem. The singular accomplishment of axiomatic set theory is its ability to give a foundation for the derivation of the entirety of classical mathematics from a handful of axioms. More generally, the reduction of a body of propositions to a particular collection of axioms underlies the mathematician's research program. ","Basis statements called axioms form the foundation of a formal theory and, together with the deducing rules, help in deriving a set of statements called theorems using proof theory.",Axioms are supplementary statements added to a formal theory that break down otherwise complex statements into more simple ones.,"Axioms are redundant statements that can be derived from other statements in a formal theory, providing additional perspective to theorems derived from the theory.",The axioms in a theory are used for experimental validation of the theorems derived from the statements in the theory.,"The axioms in a formal theory are added to prove that the statements derived from the theory are true, irrespective of their validity in the real world.",A,kaggle200,"A number of scholars claim that Gödel's incompleteness theorem suggests that any attempt to construct a theory of everything is bound to fail. 
Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.
The axioms for a functor require that these play harmoniously with substitution. Substitution is usually
In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called ""axioms"", and some ""deducing rules"" (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory).
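The description above of a theory as axioms plus deducing rules, with theorems as everything derivable, can be made concrete with a toy string-rewriting system; it is a deliberately simple stand-in for this sketch, not any formal theory mentioned in the text.

```python
# Toy "formal system": axioms are strings, and the single deducing rule appends "I"
# (a deliberately simple stand-in for real inference rules such as modus ponens).
axioms = {"M"}

def deduce(statement):
    """All statements obtainable from `statement` in one rule application."""
    return {statement + "I"}

def theorems(axioms, max_steps=4):
    """Close the axiom set under the deducing rule for a bounded number of steps."""
    derived = set(axioms)
    frontier = set(axioms)
    for _ in range(max_steps):
        frontier = {t for s in frontier for t in deduce(s)} - derived
        derived |= frontier
    return derived

print(sorted(theorems(axioms)))   # ['M', 'MI', 'MII', 'MIII', 'MIIII']
# Every axiom is a theorem, and each theorem has a finite derivation from the axioms.
```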
Thus, in a formal theory such as Peano arithmetic in which one can make statements about numbers and their arithmetical relationships to each other, one can use a Gödel numbering to indirectly make statements about the theory itself. This technique allowed Gödel to prove results about the consistency and completeness properties of formal systems.","In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system.
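The Gödel-numbering idea above, encoding statements of the theory as numbers so that arithmetic can indirectly talk about the theory itself, can be sketched with the usual prime-power encoding; the symbol table here is an assumed toy alphabet, not Gödel's original one.

```python
from sympy import prime, factorint

# Illustrative symbol codes (an assumption for this sketch, not Gödel's own table).
CODE = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
DECODE = {v: k for k, v in CODE.items()}

def godel_number(formula):
    """Encode a formula as the product over positions i of prime(i) ** code(symbol_i)."""
    n = 1
    for i, sym in enumerate(formula, start=1):
        n *= prime(i) ** CODE[sym]
    return n

def decode(n):
    """Recover the formula from its Gödel number via prime factorization."""
    exps = factorint(n)                                   # {prime: exponent}
    return "".join(DECODE[exps[prime(i)]] for i in range(1, len(exps) + 1))

g = godel_number("S0=S0")
print(g, decode(g))   # the number, and the formula recovered from it
```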
Axioms: assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of ""axioms"" in generative grammar and formal logic. In these disciplines, axioms include only statements asserted as a priori knowledge. As used here, ""axioms"" also include the theory derived from axiomatic statements.
In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called axioms, and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory).","As used here, ""axioms"" also include the theory derived from axiomatic statements.
","[SEP]What is the role of axioms in a formal theory?","['A', 'B', 'C']",1.0
What did Fresnel predict and verify with regards to total internal reflections?,"For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.Fresnel, 1866, pp.761,793–6; Whewell, 1857, p.359. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle.Buchwald, 1989, pp.390–91; Fresnel, 1866, pp.646–8. Using old experimental data, he promptly confirmed that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water.Buchwald, 1989, pp.390–91; Fresnel, 1866, pp.646–8. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface. ==Overview== When light strikes the interface between a medium with refractive index n1 and a second medium with refractive index n2, both reflection and refraction of the light may occur. One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane.Whittaker, 1910, pp.133,148–9; Darrigol, 2012, pp.212,229–31. In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). The verification involved * calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), * subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and * checking that the final polarization was circular.Fresnel, 1866, pp.760–61,792–6; Whewell, 1857, p.359. The verification involved * calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), * subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and * checking that the final polarization was circular.Fresnel, 1866, pp.760–61,792–6; Whewell, 1857, p.359. Although the reflection and transmission are dependent on polarization, at normal incidence (θ = 0) there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true). 
==Configuration== right|thumb|300px|Variables used in the Fresnel equations In the diagram on the right, an incident plane wave in the direction of the ray IO strikes the interface between two media of refractive indices n1 and n2 at point O. Part of the wave is reflected in the direction OR, and part refracted in the direction OT. Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle. The experimental confirmation was reported in a ""postscript"" to the work in which Fresnel first revealed his theory that light waves, including ""unpolarized"" waves, were purely transverse.A. Fresnel, ""Note sur le calcul des teintes que la polarisation développe dans les lames cristallisées"" et seq., Annales de Chimie et de Physique, vol.17, pp.102–11 (May 1821), 167–96 (June 1821), 312–15 (""Postscript"", July 1821); reprinted in Fresnel, 1866, pp.609–48; translated as ""On the calculation of the tints that polarization develops in crystalline plates, &postscript;"", / , 2021. By including total internal reflection in a chromatic-polarization experiment, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them.Darrigol, 2012, p.207. Fresnel, ""Mémoire sur la double réfraction que les rayons lumineux éprouvent en traversant les aiguilles de cristal de roche suivant les directions parallèles à l'axe"" (""Memoir on the double refraction that light rays undergo in traversing the needles of quartz in the directions parallel to the axis""), read 9 December 1822; printed in Fresnel, 1866, pp.731–51 (full text), pp.719–29 (extrait, first published in Bulletin de la Société philomathique for 1822, pp. 191–8). in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization,Buchwald, 1989, pp.230–31; Fresnel, 1866, p.744. and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance.Buchwald, 1989, p.442; Fresnel, 1866, pp.737–9,749. Unlike partial reflection between transparent media, total internal reflection is accompanied by a non-trivial phase shift (not just zero or 180°) for each component of polarization (perpendicular or parallel to the plane of incidence), and the shifts vary with the angle of incidence. In 1817 he noticed that plane-polarized light seemed to be partly depolarized by total internal reflection, if initially polarized at an acute angle to the plane of incidence. For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. Total internal reflection occurs when the first medium has a larger refractive index than the second medium, for example, light that starts in water and bounces off the water-to-air interface. 
The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. Another reason why internal reflection may be less than total, even beyond the critical angle, is that the external medium may be ""lossy"" (less than perfectly transparent), in which case the external medium will absorb energy from the evanescent wave, so that the maintenance of the evanescent wave will draw power from the incident wave. ","Fresnel predicted and verified that three total internal reflections at 75°27' would give a precise circular polarization if two of the reflections had water as the external medium and the third had air, but not if the reflecting surfaces were all wet or all dry.","Fresnel predicted and verified that eight total internal reflections at 68°27' would give an accurate circular polarization if four of the reflections had water as the external medium while the other four had air, but not if the reflecting surfaces were all wet or all dry.","Fresnel predicted and verified that four total internal reflections at 30°27' would result in circular polarization if two of the reflections had water as the external medium while the other two had air, regardless if the reflecting surfaces were all wet or all dry.","Fresnel predicted and verified that two total internal reflections at 68°27' would give an accurate linear polarization if one of the reflections had water as the external medium and the other had air, but not if the reflecting surfaces were all wet or all dry.","Fresnel predicted and verified that four total internal reflections at 68°27' would give a precise circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.",E,kaggle200,"This 45° relative shift is employed in Fresnel's invention, now known as the Fresnel rhomb, in which the angles of incidence are chosen such that the two internal reflections cause a total relative phase shift of 90° between the two polarizations of an incident wave. This device performs the same function as a birefringent quarter-wave plate, but is more achromatic (that is, the phase shift of the rhomb is less sensitive to wavelength). Either device may be used, for instance, to transform linear polarization to circular polarization (which Fresnel also discovered) and vice versa.
Light passing through a Fresnel rhomb undergoes two total internal reflections at the same carefully chosen angle of incidence. After one such reflection, the ""p"" component is advanced by 1/8 of a cycle (45°; π/4 radians) relative to the ""s"" component. With ""two"" such reflections, a relative phase shift of 1/4 of a cycle (90°; π/2) is obtained. The word ""relative"" is critical: as the wavelength is very small compared with the dimensions of typical apparatus, the ""individual"" phase advances suffered by the ""s"" and ""p"" components are not readily observable, but the ""difference"" between them is easily observable through its effect on the state of polarization of the emerging light.
For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.","Prism rotators use multiple internal reflections to produce beams with rotated polarization. Because they are based on total internal reflection, they are broadband—they work over a broad range of wavelengths.
Double Fresnel rhomb: A double Fresnel rhomb rotates the linear polarization axis by 90° using four internal reflections. A disadvantage may be a low ratio of useful optical aperture to length.
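Treating each Fresnel rhomb as an ideal quarter-wave (90°) retarder with its fast axis at 45° to the incoming polarization, an idealization assumed for this sketch rather than stated in the text, a Jones-calculus check reproduces the 90° rotation of linear polarization by a double rhomb.

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardance delta and fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    J0 = np.array([[np.exp(-1j * delta / 2), 0],
                   [0, np.exp(1j * delta / 2)]])
    return R @ J0 @ R.T

# Each Fresnel rhomb acts (ideally) as a 90-degree retarder; its two internal
# reflections each contribute roughly 45 degrees of relative s-p phase.
rhomb = retarder(np.pi / 2, np.pi / 4)   # fast axis at 45 deg to the input polarization
double_rhomb = rhomb @ rhomb             # two rhombs back to back: a half-wave retarder

e_in = np.array([1.0, 0.0])              # horizontal linear polarization
e_out = double_rhomb @ e_in
print((np.abs(e_out) ** 2).round(6))     # ~[0, 1]: the polarization axis is rotated by 90 deg
```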
For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. He cut a rhomb to the latter angle and found that it performed as expected. Thus the specification of the Fresnel rhomb was completed. Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle. In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig. 13 above, which shows that the phase difference δ is more sensitive to the refractive index for smaller angles of incidence.) For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.Fresnel's deduction of the phase shift in TIR is thought to have been the first occasion on which a physical meaning was attached to the argument of a complex number. Although this reasoning was applied without the benefit of knowing that light waves were electromagnetic, it passed the test of experiment, and survived remarkably intact after James Clerk Maxwell changed the presumed nature of the waves. Meanwhile, Fresnel's success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The imaginary part of the complex index represents absorption.The term critical angle, used for convenience in the above narrative, is anachronistic: it apparently dates from 1873.In the 20th century, quantum electrodynamics reinterpreted the amplitude of an electromagnetic wave in terms of the probability of finding a photon. In this framework, partial transmission and frustrated TIR concern the probability of a photon crossing a boundary, and attenuated total reflectance concerns the probability of a photon being absorbed on the other side.
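The 45°-per-reflection figure quoted above for glass of index 1.51 can be checked numerically with the standard expression for the relative s-p phase difference in total internal reflection, tan(δ/2) = cos θ · sqrt(sin²θ − n²) / sin²θ with n the relative index; this formula comes from the Fresnel-equation treatment of TIR rather than being quoted in the text itself.

```python
import numpy as np

n = 1.0 / 1.51   # relative refractive index (air over glass) for internal reflection

def delta_sp(theta_deg):
    """Relative s-p phase difference (degrees) for one total internal reflection."""
    th = np.radians(theta_deg)
    root = np.sqrt(np.sin(th) ** 2 - n ** 2)        # real beyond the critical angle
    return np.degrees(2.0 * np.arctan(np.cos(th) * root / np.sin(th) ** 2))

for angle in (48 + 37 / 60, 54 + 37 / 60):          # 48 deg 37' and 54 deg 37'
    print(f"theta = {angle:.3f} deg -> delta ~ {delta_sp(angle):.2f} deg per reflection")
# Two such reflections give ~90 degrees, which is why the Fresnel rhomb behaves as a
# quarter-wave retarder over a broad wavelength range.
```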
Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle. In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig. 2 above, which shows that the phase difference δ is more sensitive to the refractive index for smaller angles of incidence.) For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry.","","[SEP]What did Fresnel predict and verify with regards to total internal reflections?","['E', 'D', 'C']",1.0
What is the relationship between the Wigner function and the density matrix operator?,"Under the Wigner map, the density matrix transforms into the equivalent Wigner function, : W(x,p) \,\ \stackrel{\mathrm{def}}{=}\ \, \frac{1}{\pi\hbar} \int_{-\infty}^\infty \psi^*(x + y) \psi(x - y) e^{2ipy/\hbar} \,dy. The equation for the time evolution of the Wigner function, known as Moyal equation, is then the Wigner- transform of the above von Neumann equation, :\frac{\partial W(x, p, t)}{\partial t} = -\\{\\{W(x, p, t), H(x, p)\\}\\}, where H(x,p) is the Hamiltonian, and \\{\\{\cdot,\cdot\\}\\} is the Moyal bracket, the transform of the quantum commutator. For a more general Hamiltonian, if G(t) is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by : \rho(t) = G(t) \rho(0) G(t)^\dagger. == Wigner functions and classical analogies == The density matrix operator may also be realized in phase space. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In the limit of vanishing Planck's constant \hbar, W(x,p,t) reduces to the classical Liouville probability density function in phase space. == Example applications == Density matrices are a basic tool of quantum mechanics, and appear at least occasionally in almost any type of quantum- mechanical calculation. :Note: the Wigner distribution function is abbreviated here as WD rather than WDF as used at Wigner distribution function A Modified Wigner distribution function is a variation of the Wigner distribution function (WD) with reduced or removed cross-terms. The original WD, the spectrogram, and the modified WDs all belong to the Cohen's class of bilinear time-frequency representations : :C_x(t, f)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}W_x(\theta, u) \Pi(t - \theta,f - u)\, d\theta\, d u \quad = [W_x\,\ast\,\Pi] (t,f) where \Pi \left(t, f\right) is Cohen's kernel function, which is often a low-pass function, and normally serves to mask out the interference in the original Wigner representation. == Mathematical definition == *Wigner distribution : W_x(t,f) = \int_{-\infty}^\infty x(t+\tau/2) x^*(t-\tau/2) e^{-j2\pi\tau f} \, d\tau Cohen's kernel function : \Pi (t,f) = \delta_{(0,0)} (t,f) *Spectrogram :SP_x (t,f) = |ST_x (t,f)|^2 = ST_x (t,f)\,ST_x^* (t,f) where ST_x is the short-time Fourier transform of x. In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables such as the above with the Wigner quasi-probability distribution effectively serving as a measure. Note that the pseudo Wigner can also be written as the Fourier transform of the “spectral-correlation” of the STFT : PW_x(t,f) = \int_{-\infty}^\infty ST_x(t, f+ u/2) ST_x^*(t, f- u/2) e^{j2\pi u\,t} \, d u *Smoothed pseudo Wigner distribution : In the pseudo Wigner the time windowing acts as a frequency direction smoothing. 
Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase- space and operator representations, and yields insight into the workings of quantum mechanics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, which is able to deal with signals with nonlinear phase. == Definition == The polynomial Wigner–Ville distribution W^g_z(t, f) is defined as : W^g_z(t, f)=\mathcal{F}_{\tau\to f}\left[K^g_z(t, \tau)\right] where \mathcal{F}_{\tau\to f} denotes the Fourier transform with respect to \tau, and K^g_z(t, \tau) is the polynomial kernel given by : K^g_z(t, \tau)=\prod_{k=-\frac{q}{2}}^{\frac{q}{2}} \left[z\left(t+c_k\tau\right)\right]^{b_k} where z(t) is the input signal and q is an even number. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. Antisymmetrization of this ★-product yields the Moyal bracket, the proper quantum deformation of the Poisson bracket, and the phase-space isomorph (Wigner transform) of the quantum commutator in the more usual Hilbert-space formulation of quantum mechanics. # If the signal is time shifted x(t-t0) , then its LWD is time shifted as well, LWD: W_x(t-t0,f) # The LWD of a modulated signal x(t)\exp(j\omega_0 t) is shifted in frequency LWD: W_x(t,f-f0) # Is the signal x(t) is time limited, i.e., x(t)=0 for \left\vert t \right\vert >T, then the L-Wigner distribution is time limited, LWD: W_x(t,f)=0 for\left\vert t \right\vert >T # If the signal x(t) is band limited with f_m ( F(f)=0 for \left\vert f \right\vert > f_m ), then LWD: W_x(t,f) is limited in the frequency domain by f_m as well. In turn, the Weyl map of the Wigner map is summarized by Groenewold's formula, :\Phi [f] = h \iint \,da\,db ~e^{iaQ+ibP} \operatorname{Tr} ( e^{-iaQ-ibP} \Phi). ===The Weyl quantization of polynomial observables=== While the above formulas give a nice understanding of the Weyl quantization of a very general observable on phase space, they are not very convenient for computing on simple observables, such as those that are polynomials in q and p. The name density matrix itself relates to its classical correspondence to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics, which was introduced by Wigner in 1932. # D_x(t,f)=G_x(t,f)\times W_x(t,f) # D_x(t,f)=\min\left\\{|G_x(t,f)|^2,|W_x(t,f)|\right\\} # D_x(t,f)=W_x(t,f)\times \\{|G_x(t,f)|>0.25\\} # D_x(t,f)=G_x^{2.6}(t,f)W_x^{0.7}(t,f) ==See also== * Time-frequency representation * Short-time Fourier transform * Gabor transform * Wigner distribution function ==References== Category:Integral transforms In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function. This can be verified by applying the convolution property of the Wigner distribution function. 
","The Wigner function W(x, p) is the Wigner transform of the density matrix operator ρ̂, and the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g(x,p) with the Wigner function.","The Wigner function W(x, p) is a source function used for the density matrix operator ρ̂ and the product of these two functions creates the phase space wave function g(x, p).","The Wigner function W(x, p) is the derivative of the density matrix operator ρ̂ with respect to the phase space coordinate.","The Wigner function W(x, p) represents the Hamiltonian H(x,p) of the density matrix operator ρ̂, while the Moyal bracket {{⋅, ⋅}} represents the Poisson bracket in the phase space.","The Wigner function W(x, p) is the time derivative of the density matrix operator ρ̂ with respect to the phase space coordinate.",A,kaggle200,"The equation for the time evolution of the Wigner function, known as Moyal equation, is then the Wigner-transform of the above von Neumann equation,
Those operators whose eigenvalues are non-negative and sum to a finite number can be mapped to density matrices, i.e., to some physical states. The Wigner function is an image of the density matrix, so the Wigner functions admit a similar decomposition:
The Wigner function discussed here is thus seen to be the Wigner transform of the density matrix operator ""ρ̂"". Thus the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g(x, p) with the Wigner function.
The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,","It is symmetric in x and p: W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^\infty \varphi^*(p+q) \varphi(p-q) e^{-2ixq/\hbar} \,dq, where φ is the normalized momentum-space wave function, proportional to the Fourier transform of ψ.
In 3D, W(\vec{r},\vec{p}) = \frac{1}{(2\pi)^3} \int \psi^*(\vec{r}+\hbar\vec{s}/2)\, \psi(\vec{r}-\hbar\vec{s}/2)\, e^{i\vec{p}\cdot\vec{s}} \,d^3 s.
In the general case, which includes mixed states, it is the Wigner transform of the density matrix, W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^\infty \langle x+y|\hat{\rho}|x-y\rangle e^{-2ipy/\hbar} \,dy, where ⟨x|ψ⟩ = ψ(x). This Wigner transformation (or map) is the inverse of the Weyl transform, which maps phase-space functions to Hilbert-space operators, in Weyl quantization.
Thus, the Wigner function is the cornerstone of quantum mechanics in phase space.
In 1949, José Enrique Moyal elucidated how the Wigner function provides the integration measure (analogous to a probability density function) in phase space, to yield expectation values from phase-space c-number functions g(x, p) uniquely associated to suitably ordered operators Ĝ through Weyl's transform (see Wigner–Weyl transform and property 7 below), in a manner evocative of classical probability theory.
Specifically, the expectation value of an operator Ĝ is a ""phase-space average"" of the Wigner transform of that operator: \langle \hat{G} \rangle = \int dx\, dp\; W(x,p)\, g(x,p).
The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function, W(x,p) \,\ \stackrel{\mathrm{def}}{=}\ \, \frac{1}{\pi\hbar} \int_{-\infty}^\infty \psi^*(x + y) \psi(x - y) e^{2ipy/\hbar} \,dy.
The equation for the time evolution of the Wigner function, known as the Moyal equation, is then the Wigner-transform of the above von Neumann equation, \frac{\partial W(x, p, t)}{\partial t} = -\{\{W(x, p, t), H(x, p)\}\}, where H(x,p) is the Hamiltonian, and \{\{\cdot,\cdot\}\} is the Moyal bracket, the transform of the quantum commutator.
The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant ℏ , W(x,p,t) reduces to the classical Liouville probability density function in phase space.
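As a supplementary sketch of the classical limit just described (this expansion is standard but is not part of the quoted sources), the Moyal bracket can be written as \{\{f,g\}\} = \frac{2}{\hbar}\, f \sin\!\left(\tfrac{\hbar}{2}\big(\overleftarrow{\partial}_x\overrightarrow{\partial}_p - \overleftarrow{\partial}_p\overrightarrow{\partial}_x\big)\right) g = \{f,g\}_{\mathrm{PB}} + O(\hbar^2), so that as \hbar \to 0 the Moyal equation \partial_t W = -\{\{W,H\}\} goes over to the classical Liouville equation \partial_t W = -\{W,H\}_{\mathrm{PB}}.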
The Wigner transformation is a general invertible transformation of an operator Ĝ on a Hilbert space to a function g(x, p) on phase space and is given by g(x,p) = \int_{-\infty}^\infty ds\, e^{ips/\hbar} \left\langle x - \tfrac{s}{2} \right| \hat{G} \left| x + \tfrac{s}{2} \right\rangle.
Hermitian operators map to real functions. The inverse of this transformation, from phase space to Hilbert space, is called the Weyl transformation: \langle x|\hat{G}|y\rangle = \int_{-\infty}^\infty \frac{dp}{h}\, e^{ip(x-y)/\hbar}\, g\!\left(\tfrac{x+y}{2}, p\right) (not to be confused with the distinct Weyl transformation in differential geometry).
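A minimal numerical sketch of the maps above (not taken from the quoted sources; the Gaussian packet parameters x0, p0, sigma and the grids are illustrative assumptions): it tabulates W(x, p) for a pure Gaussian state directly from the defining integral and checks that phase-space averages against W reproduce the expectation values, as the trace/overlap statement implies.

import numpy as np

hbar = 1.0                      # work in units with hbar = 1 (assumption)
x0, p0, sigma = 1.0, 0.5, 1.0   # assumed Gaussian packet parameters

def psi(x):
    # Normalized Gaussian wave packet centred at x0 with mean momentum p0.
    norm = 1.0 / (np.pi * sigma**2) ** 0.25
    return norm * np.exp(-((x - x0) ** 2) / (2.0 * sigma**2) + 1j * p0 * x / hbar)

# Phase-space grid and integration variable y.
x = np.linspace(-7.0, 9.0, 161)
p = np.linspace(-5.5, 6.5, 121)
y = np.linspace(-10.0, 10.0, 1001)
dy = y[1] - y[0]

# W[i, j] ~ (1/(pi*hbar)) * Integral psi*(x_i + y) psi(x_i - y) exp(2 i p_j y / hbar) dy
W = np.empty((x.size, p.size))
phase = np.exp(2j * np.outer(y, p) / hbar)          # shape (len(y), len(p))
for i, xi in enumerate(x):
    corr = np.conj(psi(xi + y)) * psi(xi - y)       # shape (len(y),)
    W[i, :] = (corr[:, None] * phase).sum(axis=0).real * dy / (np.pi * hbar)

dx, dp = x[1] - x[0], p[1] - p[0]
print("int W dx dp ~", round(W.sum() * dx * dp, 4))                   # close to 1
print("<x> from W  ~", round((W * x[:, None]).sum() * dx * dp, 4))    # close to x0
print("<p> from W  ~", round((W * p[None, :]).sum() * dx * dp, 4))    # close to p0

In practice the y-integral is usually evaluated with an FFT; the explicit quadrature above simply mirrors the defining formula as written.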
The Wigner function W(x, p) discussed here is thus seen to be the Wigner transform of the density matrix operator ρ̂. Thus the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g(x, p) with the Wigner function.","The Wigner function is an image of the density matrix, so the Wigner functions admit a similar decomposition:
The Wigner function discussed here is thus seen to be the Wigner transform of the density matrix operator ""ρ̂"". Under the Wigner map, the density matrix transforms into the equivalent Wigner function. Thus the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g(x, p) with the Wigner function.
The density matrix operator may also be realized in phase space. Thus the trace of an operator with the density matrix Wigner-transforms to the equivalent phase-space integral overlap of g(x, p) with the Wigner function. The inverse of this transformation, from phase space to Hilbert space, is called the Weyl transformation: \langle x|\hat{G}|y\rangle = \int_{-\infty}^\infty \frac{dp}{h}\, e^{ip(x-y)/\hbar}\, g\!\left(\tfrac{x+y}{2}, p\right) (not to be confused with the distinct Weyl transformation in differential geometry).
The Wigner function W(x, p) discussed here is thus seen to be the Wigner transform of the density matrix operator ρ̂. Under the Wigner map, the density matrix transforms into the equivalent Wigner function, W(x,p) \,\ \stackrel{\mathrm{def}}{=}\ \, \frac{1}{\pi\hbar} \int_{-\infty}^\infty \psi^*(x + y) \psi(x - y) e^{2ipy/\hbar} \,dy.
The equation for the time evolution of the Wigner function, known as the Moyal equation, is then the Wigner-transform of the above von Neumann equation, \frac{\partial W(x, p, t)}{\partial t} = -\{\{W(x, p, t), H(x, p)\}\}, where H(x,p) is the Hamiltonian, and \{\{\cdot,\cdot\}\} is the Moyal bracket, the transform of the quantum commutator.
The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. This Wigner transformation (or map) is the inverse of the Weyl transform, which maps phase-space functions to Hilbert-space operators, in Weyl quantization.
Thus, the Wigner function is the cornerstone of quantum mechanics in phase space.
In 1949, José Enrique Moyal elucidated how the Wigner function provides the integration measure (analogous to a probability density function) in phase space, to yield expectation values from phase-space c-number functions g(x, p) uniquely associated to suitably ordered operators Ĝ ","The Wigner function is an image of the density matrix, so the Wigner functions admit a similar decomposition:
[SEP]What is the relationship between the Wigner function and the density matrix operator?","['A', 'E', 'B']",1.0
What is one of the examples of the models proposed by cosmologists and theoretical physicists without the cosmological or Copernican principles that can be used to address specific issues in the Lambda-CDM model and distinguish between current models and other possible models?,"The current standard model of cosmology is the Lambda-CDM model, wherein the Universe is governed by general relativity, began with a Big Bang and today is a nearly-flat universe that consists of approximately 5% baryons, 27% cold dark matter, and 68% dark energy.See the Planck Collaboration's 2015 data release. The standard cosmological model is known as the Lambda-CDM model. ===Equations of motion=== Within the standard cosmological model, the equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. These proposals typically modify some of the main features of Lambda-CDM, but do not reject the Big Bang. ===Anisotropic universe=== Isotropicity – the idea that the universe looks the same in all directions – is one of the core assumptions that enters into the Friedmann equations. However, the final announcement (in April 1992) of COBE satellite data corrected the earlier contradiction of the Big Bang; the popularity of plasma cosmology has since fallen. == Alternatives and extensions to Lambda-CDM == The standard model of cosmology today, the Lambda-CDM model, has been extremely successful at providing a theoretical framework for structure formation, the anisotropies in the cosmic microwave background, and the accelerating expansion of the universe. Yet other theories attempt to explain dark matter and dark energy as different facets of the same underlying fluid (see dark fluid), or hypothesize that dark matter could decay into dark energy. ===Exotic dark energy=== In Lambda-CDM, dark energy is an unknown form of energy that tends to accelerate the expansion of the universe. Modern physical cosmology is dominated by the Big Bang Theory which attempts to bring together observational astronomy and particle physics;""Cosmology"" Oxford Dictionaries more specifically, a standard parameterization of the Big Bang with dark matter and dark energy, known as the Lambda-CDM model. The assumptions that the current standard model of cosmology relies upon are: # the universality of physical laws – that the laws of physics don't change from one place and time to another, # the cosmological principle – that the universe is roughly homogeneous and isotropic in space though not necessarily in time, and # the Copernican principle – that we are not observing the universe from a preferred locale. Such theories include alternative models of dark energy, such as quintessence, phantom energy and some ideas in brane cosmology; alternative models of dark matter, such as modified Newtonian dynamics; alternatives or extensions to inflation such as chaotic inflation and the ekpyrotic model; and proposals to supplement the universe with a first cause, such as the Hartle–Hawking boundary condition, the cyclic model, and the string landscape. Cosmologists cannot explain all cosmic phenomena exactly, such as those related to the accelerating expansion of the universe, using conventional forms of energy. 
A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate.For an overview, see Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood. The detection is controversial, and other scientists have found that the universe is isotropic to a great degree. ===Exotic dark matter === In Lambda-CDM, dark matter is an extremely inert form of matter that does not interact with both ordinary matter (baryons) and light, but still exerts gravitational effects. Work continues on this model (most notably by Jayant V. Narlikar), although it has not gained widespread mainstream acceptance. ===Proposals based on observational skepticism=== As the observational cosmology began to develop, certain astronomers began to offer alternative speculations regarding the interpretation of various phenomena that occasionally became parts of non-standard cosmologies. ====Tired light==== Tired light theories challenge the common interpretation of Hubble's Law as a sign the universe is expanding. The simplest explanation of dark energy is the cosmological constant (the 'Lambda' in Lambda-CDM). Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth ""sterile"" species of neutrino. ====Standard model of Big Bang cosmology==== The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains a cosmological constant, denoted by Lambda (Greek Λ), associated with dark energy, and cold dark matter (abbreviated CDM). Quantum field theory predicts a cosmological constant (CC) much like dark energy, but 120 orders of magnitude larger than that observed. As a consequence of introducing an arbitrary function, there may be freedom to explain the accelerated expansion and structure formation of the Universe without adding unknown forms of dark energy or dark matter. ","The Copernican principle, which proposes that Earth, the Solar System, and the Milky Way are not at the centre of the universe, but instead, the universe is expanding equally in all directions. This principle is a modification of the Lambda-CDM model and has been shown to explain several observational results.","Inhomogeneous cosmology, which states that the universe is entirely homogeneous and isotropic, directly proportional to the density of matter and radiation. This model proposes that everything in the universe is completely uniform, but it does not match observations.","Inhomogeneous cosmology, which models the universe as an extremely large, low-density void, instead of using the concept of dark energy. 
According to the model, this theory can match the observed accelerating universe and cosmological constant, but it contradicts the Copernican principle.","The cosmological principle, which proposes that Earth, the Solar System, and the Milky Way are at the centre of the universe. This principle is a modification of the Lambda-CDM model and has been shown to explain several observational results.","The principle of dark energy, which proposes that a new form of energy, not previously detected, is responsible for the acceleration of the expansion of the universe. This principle is a modification of the Lambda-CDM model and has been shown to explain several observational results.",C,kaggle200,"Hubble's observations of redshift in light from distant galaxies indicated that the universe was expanding and acentric. As a result, galactocentrism was abandoned in favor of the Big Bang model of the acentric expanding universe. Further assumptions, such as the Copernican principle, the cosmological principle, dark energy, and dark matter, eventually lead to the current model of cosmology, Lambda-CDM.
The ΛCDM model has been shown to satisfy the cosmological principle, which states that, on a large-enough scale, the universe looks the same in all directions (isotropy) and from every location (homogeneity); ""the universe looks the same whoever and wherever you are."" The cosmological principle exists because when the predecessors of the ΛCDM model were first being developed, there wasn't sufficient data available to distinguish between more complex anisotropic or inhomogeneous models, so homogeneity and isotropy were assumed to simplify the models, and the assumptions were carried over into the ΛCDM model. However, recent findings have suggested that violations of the cosmological principle, especially of isotropy, exist. These violations have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is now obsolete or that the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universe. This has additional implications for the validity of the cosmological constant in the ΛCDM model, as dark energy is implied by observations only if the cosmological principle is true.
A prominent example in this context is inhomogeneous cosmology, to model the observed accelerating universe and cosmological constant. Instead of using the current accepted idea of dark energy, this model proposes the universe is much more inhomogeneous than currently assumed, and instead, we are in an extremely large low-density void. To match observations we would have to be very close to the centre of this void, immediately contradicting the Copernican principle.
The standard model of cosmology, the Lambda-CDM model, assumes the Copernican principle and the more general cosmological principle. Some cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible models.","Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th-17th century paradigm shift away from the Ptolemaic system, which placed Earth at the center of the universe. Copernicus proposed that the motion of the planets could be explained by reference to an assumption that the Sun is centrally located and stationary in contrast to the geocentrism. He argued that the apparent retrograde motion of the planets is an illusion caused by Earth's movement around the Sun, which the Copernican model placed at the centre of the universe. Copernicus himself was mainly motivated by technical dissatisfaction with the earlier system and not by support for any mediocrity principle. In fact, although the Copernican heliocentric model is often described as ""demoting"" Earth from its central role it had in the Ptolemaic geocentric model, it was successors to Copernicus, notably the 16th century Giordano Bruno, who adopted this new perspective. The Earth's central position had been interpreted as being in the ""lowest and filthiest parts"". Instead, as Galileo said, the Earth is part of the ""dance of the stars"" rather than the ""sump where the universe's filth and ephemera collect"". In the late 20th Century, Carl Sagan asked, ""Who are we? We find that we live on an insignificant planet of a humdrum star lost in a galaxy tucked away in some forgotten corner of a universe in which there are far more galaxies than people.""While the Copernican principle is derived from the negation of past assumptions, such as geocentrism, heliocentrism, or galactocentrism which state that humans are at the center of the universe, the Copernican principle is stronger than acentrism, which merely states that humans are not at the center of the universe. The Copernican principle assumes acentrism and also states that human observers or observations from Earth are representative of observations from the average position in the universe. Michael Rowan-Robinson emphasizes the Copernican principle as the threshold test for modern thought, asserting that: ""It is evident that in the post-Copernican era of human history, no well-informed and rational person can imagine that the Earth occupies a unique position in the universe.""Most modern cosmology is based on the assumption that the cosmological principle is almost, but not exactly, true on the largest scales. The Copernican principle represents the irreducible philosophical assumption needed to justify this, when combined with the observations. If one assumes the Copernican principle and observes that the universe appears isotropic or the same in all directions from the vantage point of Earth, then one can infer that the universe is generally homogeneous or the same everywhere (at any given time) and is also isotropic about any given point. These two conditions make up the cosmological principle.In practice, astronomers observe that the universe has heterogeneous or non-uniform structures up to the scale of galactic superclusters, filaments and great voids. 
In the current Lambda-CDM model, the predominant model of cosmology in the modern era, the universe is predicted to become more and more homogeneous and isotropic when observed on larger and larger scales, with little detectable structure on scales of more than about 260 million parsecs. However, recent evidence from galaxy clusters, quasars, and type Ia supernovae suggests that isotropy is violated on large scales. Furthermore, various large-scale structures have been discovered, such as the Clowes–Campusano LQG, the Sloan Great Wall, U1.11, the Huge-LQG, the Hercules–Corona Borealis Great Wall, and the Giant Arc, all which indicate that homogeneity might be violated.
A prominent example in this context is inhomogeneous cosmology, to model the observed accelerating universe and cosmological constant. Instead of using the current accepted idea of dark energy, this model proposes the universe is much more inhomogeneous than currently assumed, and instead, we are in an extremely large low-density void. To match observations we would have to be very close to the centre of this void, immediately contradicting the Copernican principle.
The standard model of cosmology, the Lambda-CDM model, assumes the Copernican principle and the more general cosmological principle. Some cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible models.","Some cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible modelsSome cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible models.These violations have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is now obsolete or that the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universeFurther assumptions, such as the Copernican principle, the cosmological principle, dark energy, and dark matter, eventually lead to the current model of cosmology, Lambda-CDM.
The ΛCDM model has been shown to satisfy the cosmological principle, which states that, on a large-enough scale, the universe looks the same in all directions (isotropy) and from every location (homogeneity); ""the universe looks the same whoever and wherever you are."" The cosmological principle exists because when the predecessors of the ΛCDM model were first being developed, there wasn't sufficient data available to distinguish between more complex anisotropic or inhomogeneous models, so homogeneity and isotropy were assumed to simplify the models, and the assumptions were carried over into the ΛCDM modelThis has additional implications for the validity of the cosmological constant in the ΛCDM model, as dark energy is implied by observations only if the cosmological principle is true.
A prominent example in this context is inhomogeneous cosmology, to model the observed accelerating universe and cosmological constantIn the current Lambda-CDM model, the predominant model of cosmology in the modern era, the universe is predicted to become more and more homogeneous and is","Some cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible modelsSome cosmologists and theoretical physicists have created models without the cosmological or Copernican principles to constrain the values of observational results, to address specific known issues in the Lambda-CDM model, and to propose tests to distinguish between current models and other possible models.These violations have called the ΛCDM model into question, with some authors suggesting that the cosmological principle is now obsolete or that the Friedmann–Lemaître–Robertson–Walker metric breaks down in the late universeFurther assumptions, such as the Copernican principle, the cosmological principle, dark energy, and dark matter, eventually lead to the current model of cosmology, Lambda-CDM.
[SEP]What is one of the examples of the models proposed by cosmologists and theoretical physicists without the cosmological or Copernican principles that can be used to address specific issues in the Lambda-CDM model and distinguish between current models and other possible models?","['A', 'C', 'E']",0.5
What is the Roche limit?,"In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. The Roche limit for a rigid spherical satellite is the distance, d, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object:see calculation in Frank H. Shu, The Physical Universe: an Introduction to Astronomy, p. 431, University Science Books (1982), . : d = R_M\left(2 \frac {\rho_M} {\rho_m} \right)^{\frac{1}{3}} where R_M is the radius of the primary, \rho_M is the density of the primary, and \rho_m is the density of the satellite. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory. === Rigid satellites === The rigid-body Roche limit is a simplified calculation for a spherical satellite. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The term is named after Édouard Roche (, ), the French astronomer who first calculated this theoretical limit in 1848. == Explanation == The Roche limit typically applies to a satellite's disintegrating due to tidal forces induced by its primary, the body around which it orbits. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities. Roche himself derived the following approximate solution for the Roche limit: : d \approx 2.44R\left( \frac {\rho_M} {\rho_m} \right)^{1/3} However, a better approximation that takes into account the primary's oblateness and the satellite's mass is: : d \approx 2.423 R\left( \frac {\rho_M} {\rho_m} \right)^{1/3} \left( \frac{(1+\frac{m}{3M})+\frac{c}{3R}(1+\frac{m}{M})}{1-c/R} \right)^{1/3} where c/R is the oblateness of the primary. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. * Roche Limit Description from NASA Category:Gravity Category:Space science Category:Tidal forces Category:Planetary rings Category:Equations of astronomy This is the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also go away from, rather than toward, the satellite. === Fluid satellites === A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. The Roche limit is not the only factor that causes comets to break apart. Chandrasekhar, Ellipsoidal figures of equilibrium (New Haven: Yale University Press, 1969), Chapter 8: The Roche ellipsoids (189–240). 
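A minimal sketch evaluating the two Roche-limit expressions quoted above (not part of the quoted source text; the primary radius and the two densities are rough Earth–Moon-like values assumed only for illustration):

R_primary = 6.371e6        # m, assumed radius of the primary (Earth-like)
rho_primary = 5514.0       # kg/m^3, assumed mean density of the primary
rho_satellite = 3344.0     # kg/m^3, assumed mean density of the satellite (Moon-like)

# Rigid-sphere estimate: d = R_M * (2 * rho_M / rho_m)**(1/3)
d_rigid = R_primary * (2.0 * rho_primary / rho_satellite) ** (1.0 / 3.0)

# Roche's fluid approximation: d ~= 2.44 * R * (rho_M / rho_m)**(1/3)
d_fluid = 2.44 * R_primary * (rho_primary / rho_satellite) ** (1.0 / 3.0)

print(f"rigid-body Roche limit : {d_rigid / 1e3:,.0f} km")
print(f"fluid Roche limit      : {d_fluid / 1e3:,.0f} km")

With these assumed inputs the rigid estimate comes out near 9,500 km and the fluid approximation near 18,400 km, illustrating how strongly the result depends on the assumed rigidity of the satellite.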
* == External links == * Discussion of the Roche Limit * Audio: Cain/Gay – Astronomy Cast Tidal Forces Across the Universe – August 2007. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit. Indeed, almost all known planetary rings are located within their Roche limit. Roché is a surname and given name. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.) At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. ",The Roche limit is the distance at which tidal effects would cause an object to rotate since the forces exerted by two massive bodies produce a torque on a third object.,The Roche limit is the distance at which tidal effects would cause an object to unite since differential force from a planet results in parts becoming attracted to one another.,The Roche limit is the distance at which tidal effects would cause a planet to disintegrate since differential force from an object overcomes the planet's core.,The Roche limit is the distance at which tidal effects would cause an object to disintegrate since differential force from a planet overcomes the attraction of the parts between them.,"The Roche limit is the distance at which tidal effects would cause an object to break apart due to differential force from the planet overcoming the attraction of the parts of the object for one another, which depends on the object's density and composition, as well as the mass and size of the planet.",D,kaggle200,"Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.)
The Roche lobe is different from the Roche sphere, which approximates the gravitational sphere of influence of one astronomical body in the face of perturbations from a more massive body around which it orbits. It is also different from the Roche limit, which is the distance at which an object held together only by gravity begins to break up due to tidal forces. The Roche lobe, Roche limit, and Roche sphere are named after the French astronomer Édouard Roche.
The separation between the A ring and the F Ring has been named the Roche Division in honor of the French physicist Édouard Roche. The Roche Division should not be confused with the Roche limit which is the distance at which a large object is so close to a planet (such as Saturn) that the planet's tidal forces will pull it apart. Lying at the outer edge of the main ring system, the Roche Division is in fact close to Saturn's Roche limit, which is why the rings have been unable to accrete into a moon.
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2. Figure 4 shows the differential force of gravity on a spherical body (body 1) exerted by another body (body 2). These so-called ""tidal forces"" cause strains on both bodies and may distort them or even, in extreme cases, break one or the other apart. The Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one another. These strains would not occur if the gravitational field were uniform, because a uniform field only causes the entire body to accelerate together in the same direction and at the same rate.","In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities.
The term is named after Édouard Roche (French: [ʁɔʃ], English: ROSH), the French astronomer who first calculated this theoretical limit in 1848.
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2. Figure 4 shows the differential force of gravity on a spherical body (body 1) exerted by another body (body 2). These so-called tidal forces cause strains on both bodies and may distort them or even, in extreme cases, break one or the other apart. The Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one another. These strains would not occur if the gravitational field were uniform, because a uniform field only causes the entire body to accelerate together in the same direction and at the same rate.","In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitationThe Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one anotherThe Roche Division should not be confused with the Roche limit which is the distance at which a large object is so close to a planet (such as Saturn) that the planet's tidal forces will pull it apartInside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesceThe Roche radius depends on the radius of the first body and on the ratio of the bodies' densities.
The term is named after Édouard Roche (French: [ʁɔʃ], English: ROSH), the French astronomer who first calculated this theoretical limit in 1848.
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2Indeed, almost all known planetary rings are located within their Roche limitIt is also different from the Roche limit, which is the distance at which an object held together only by gravity begins to break up due to tidal forcesLying at the outer edge of the main ring system, the Roche Division is in fact close to Saturn's Roche limit, which is why the rings have been unable to accrete into a moon.
When a body (body 1) is acted on by the gravity of another body (body 2), the field can vary significantly on body 1 between the side of the body facing body 2 and the side facing away from body 2The Roche lobe, Roche limit, and Roche sphere are named after the French astronomer Édouard Roche.
The separation between the A ring and the F Ring has been named the Roche Division in honor of the French physicist Éd","In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitationThe Roche limit is the distance from a planet at which tidal effects would cause an object to disintegrate because the differential force of gravity from the planet overcomes the attraction of the parts of the object for one anotherThe Roche Division should not be confused with the Roche limit which is the distance at which a large object is so close to a planet (such as Saturn) that the planet's tidal forces will pull it apartInside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesceThe Roche radius depends on the radius of the first body and on the ratio of the bodies' densities.
[SEP]What is the Roche limit?","['E', 'D', 'C']",0.5
What is Martin Heidegger's view on the relationship between time and human existence?,"===Time=== Heidegger believes that time finds its meaning in death, according to Michael Kelley. Being and Time () is the 1927 magnum opus of German philosopher Martin Heidegger and a key document of existentialism. The Genesis of Heidegger's Being and Time (Berkeley, Los Angeles, London: University of California Press, 1995), p. 568. Although Heidegger did not complete the project outlined in Being and Time, later works explicitly addressed the themes and concepts of Being and Time. Being-in-the-world: A Commentary on Heidegger's Being and Time, División I, MIT Press Almost all central concepts of Being and Time are derived from Augustine, Luther, and Kierkegaard, according to Christian Lotz.Luther’s influence on Heidegger. * Michael Gelven, A Commentary on Heidegger's ""Being and Time"" (Northern Illinois University Press; Revised edition, 1989). In this vein, Robert J. Dostal asserts that ""if we do not see how much it is the case that Husserlian phenomenology provides the framework for Heidegger's approach,"" then it's impossible to exactly understand Being and Time.Robert J. Dostal, ""Time and Phenomenology in Husserl and Heidegger"", in Charles Guignon (ed.), The Cambridge Companion to Heidegger (Cambridge & New York: Cambridge University Press, 1993), p. 142. The book instead provides ""an answer to the question of what it means to be human"" (Critchley).Critchley, S., ""Heidegger's Being and Time, part 8: Temporality"", The Guardian, July 27, 2009. * Taylor Carman, Heidegger's Analytic: Interpretation, Discourse, and Authenticity in ""Being and Time"" (Cambridge: Cambridge University Press, 2003). In Being and Time, the philosopher Martin Heidegger made the distinction between ontical and ontological, or between beings and being as such. * * Theodore Kisiel, The Genesis of Heidegger's Being and Time (Berkeley & Los Angeles: University of California Press, 1993). This was Heidegger's most direct confrontation with Being and Time. ""The present is the nodal moment which makes past and future intelligible,"" writes Lilian Alweiss.Alweiss, L., ""Heidegger and 'the concept of time'"", History of the Human Sciences, Vol. 15, Nr. 3, 2002. Simon Critchley writes (2009) that it is impossible to understand developments in continental philosophy after Heidegger without understanding Being and Time. ==Related work== Being and Time is the major achievement of Heidegger's early career, but he produced other important works during this period: *The publication in 1992 of the early lecture course, Platon: Sophistes (Plato's Sophist, 1924), made clear the way in which Heidegger's reading of Aristotle's Nicomachean Ethics was crucial to the formulation of the thought expressed in Being and Time. * Hubert Dreyfus, Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I (Cambridge, Massachusetts, & London: MIT Press, 1990). Jean-Paul Sartre's existentialism (of 1943) has been described as merely ""a version of Being and Time"". On the Origin of Time is a 2023 book by physicist Thomas Hertog about the theories of Stephen Hawking. The unwritten “second half” was to include a critique of Western philosophy.Sein und Zeit, pp. 39–40. 
==Summary== ===Dasein=== Being and Time explicitly rejects Descartes' notion of the human being as a subjective spectator of objects, according to Marcella Horrigan-Kelly (et al.).Understanding the Key Tenets of Heidegger’s Philosophy for Interpretive Phenomenological Research Marcella Horrigan-Kelly , Michelle Millar , and Maura Dowling, International Journal of Qualitative Methods January–December 2016: 1–8 https://journals.sagepub.com/doi/pdf/10.1177/1609406916680634 The book instead holds that both subject and object are inseparable. He says this ""ontological inquiry"" is required to understand the basis of the sciences.Martin Heidegger, Being and Time, §3. ==Ontology, phenomenology, and the ontological difference== Traditional ontology asks ""Why is there anything?"" * William D. Blattner, Heidegger's Temporal Idealism (Cambridge: Cambridge University Press, 1999). ","Martin Heidegger believes that humans exist within a time continuum that is infinite and does not have a defined beginning or end. The relationship to the past involves acknowledging it as a historical era, and the relationship to the future involves creating a world that will endure beyond one's own time.","Martin Heidegger believes that humans do not exist inside time, but that they are time. The relationship to the past is a present awareness of having been, and the relationship to the future involves anticipating a potential possibility, task, or engagement.","Martin Heidegger does not believe in the existence of time or that it has any effect on human consciousness. The relationship to the past and the future is insignificant, and human existence is solely based on the present.",Martin Heidegger believes that the relationship between time and human existence is cyclical. The past and present are interconnected and the future is predetermined. Human beings do not have free will.,"Martin Heidegger believes that time is an illusion, and the past, present, and future are all happening simultaneously. Humans exist outside of this illusion and are guided by a higher power.",B,kaggle200,"German philosopher Martin Heidegger (1889-1976) discusses ""facticity"" as the ""thrownness"" (""Geworfenheit"") of individual existence, which is to say we are ""thrown into the world."" By this, he is not only referring to a brute fact, or the factuality of a concrete historical situation, e.g., ""born in the '80s."" Facticity is something that already informs and has been taken up in existence, even if it is unnoticed or left unattended. As such, facticity is not something we come across and directly behold. In moods, for example, facticity has an enigmatic appearance, which involves both turning toward and away from it. For Heidegger, moods are conditions of thinking and willing to which they must in some way respond. The ""thrownness"" of human existence (or ""Dasein"") is accordingly disclosed through moods.
Other significant postmodern figures whom Ulmer references include Martin Heidegger, Michel Foucault, Algirdas Greimas, Terry Eagleton, Gilles Deleuze, and Giorgio Agamben.
Martin Heidegger attacked Sartre's concept of existential humanism in his ""Letter on Humanism"" of 1946, accusing Sartre of elevating Reason above Being.
According to Martin Heidegger we do not exist inside time, we ""are"" time. Hence, the relationship to the past is a present awareness of ""having been"", which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes ""being ahead of oneself"" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.","Heidegger claims that there are three “concealments” of the abandonment of being: calculation, acceleration, and the claim of massiveness.
Calculation: Heidegger characterizes this as the machination of technicity, or the belief that one fully understands scientific data and experiments and in so doing places their full faith in those concepts. Heidegger believes that this is a parallel to the belief in God, because there is no longer need for questioning this concept that has become own-most to truth.
Acceleration: The mania for what is new or surprising, especially technologically. Heidegger believed that this overpowered the truth and questioning of abandonment because the excitement sweeps one away and gets one caught up in the quantitative enhancement of status of accomplishment, according to both Heidegger and Nietzsche a false moral governing.
The outbreak of massiveness: An idea that the rare and unique quality of abandonment, is compromised by the beliefs of the masses, not only in the overwhelming societal numbers of people but in the beliefs and “moral identities” that are common to the many and the all.
Before Sartre defined abandonment as abandonment by, or of the idea of, a higher omnipotent power, philosopher Martin Heidegger wrote about the abandonment of self in much the same way. Deriving his ideas from Nietzsche's work, Heidegger theorized that the abandonment of being is the cause of “the distress of lack of distress,” under the belief that a person's distress is the opening of the mind to the truth of existence, especially the truth that one's existence is meaningless. Therefore, a person's truest state, one in which being comes before meaning, is also one of extreme distress. Heidegger also summarizes this concept as the abandonment of being. He claims it is brought on by the darkness of the world in “modern” times and derangement of the West; the death of the moral (echoing Nietzsche).The importance of abandonment theory is that it, according to Heidegger, determines an epoch in the historical search for “be-ing.” It is the disownment of the surety of being as less useful than the constant questioning of being, the magnitude of the non-form that reveals the “truth” of life better than transparent and empty platitudes.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality.According to Martin Heidegger we do not exist inside time, we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes ""being ahead of oneself"" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.","Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcendedHeidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.The ""thrownness"" of human existence (or ""Dasein"") is accordingly disclosed through moods.
Other significant postmodern figures whom Ulmer references include Martin Heidegger, Michel Foucault, Algirdas Greimas, Terry Eagleton, Gilles Deleuze, and Giorgio Agamben.
Martin Heidegger attacked Sartre's concept of existential humanism in his ""Letter on Humanism"" of 1946, accusing Sartre of elevating Reason above Being.
According to Martin Heidegger we do not exist inside time, we ""are"" timeDuration, in Bergson's view, was creativity and memory as an essential component of reality.According to Martin Heidegger we do not exist inside time, we are timeHe claims it is brought on by the darkness of the world in “modern” times and derangement of the West; the death of the moral (echoing Nietzsche).The importance of abandonment theory is that it, according to Heidegger, determines an epoch in the historical search for “be-ing.” It is the disownment of the surety of being as less useful than the constant questioning of being, the magnitude of the non-form that reveals the “truth” of life better than transparent and empty platitudes.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as DurationHeidegger also summarizes this concept as the abandonment of beingFor Heidegger, moods are conditions of thinking and willing to which they must in some way respondHeidegger claims that there are three “concealments” of the abandonment of being: calculation, acceleration, and the claim of massiveness.
Calculation: Heidegger characterizes this as the machination of technicity, or the belief that one fully understands scientific data and experiments and in so doing places their full faith in those conceptsHeidegger believes that this is a parallel to the belief in Go","Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcendedHeidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.The ""thrownness"" of human existence (or ""Dasein"") is accordingly disclosed through moods.
[SEP]What is Martin Heidegger's view on the relationship between time and human existence?","['B', 'E', 'A']",1.0
"What is the ""ultraviolet catastrophe""?","The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range.The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The ""ultraviolet catastrophe"" is the expression of the fact that the formula misbehaves at higher frequencies, i.e. B_{ u}(T) \to \infty as u \to \infty. Ultraviolet (UV) is a form of electromagnetic radiation with wavelength shorter than that of visible light, but longer than X-rays. Ultraviolet is a novelization of the science fiction film of the same name. Ultraviolet astronomy is the observation of electromagnetic radiation at ultraviolet wavelengths between approximately 10 and 320 nanometres; shorter wavelengths--higher energy photons--are studied by X-ray astronomy and gamma- ray astronomy. UV‑C is the highest-energy, most- dangerous type of ultraviolet radiation, and causes adverse effects that can variously be mutagenic or carcinogenic. Although long-wavelength ultraviolet is not considered an ionizing radiation because its photons lack the energy to ionize atoms, it can cause chemical reactions and causes many substances to glow or fluoresce. Ultraviolet radiation is the signature of hotter objects, typically in the early and late stages of their evolution. As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a problem. An ultraviolet detector (also known as UV detector or UV-Vis detector) is a type of non-destructive chromatography detector which measures the amount of ultraviolet or visible light absorbed by components of the mixture being eluted off the chromatography column. At still shorter wavelengths of UV, damage continues to happen, but the overt effects are not as great with so little penetrating the atmosphere. Short-wave ultraviolet radiation can destroy DNA in living microorganisms. Extreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter. The Sun emits ultraviolet radiation at all wavelengths, including the extreme ultraviolet where it crosses into X-rays at 10 nm. This standard shows that most sunburn happens due to UV at wavelengths near the boundary of the UV‑A and UV‑B bands. ==== Skin damage ==== Overexposure to UV‑B radiation not only can cause sunburn but also some forms of skin cancer. In 1960, the effect of ultraviolet radiation on DNA was established.James Bolton, Christine Colton, The Ultraviolet Disinfection Handbook, American Water Works Association, 2008 , pp. 3–4 The discovery of the ultraviolet radiation with wavelengths below 200 nm, named ""vacuum ultraviolet"" because it is strongly absorbed by the oxygen in air, was made in 1893 by German physicist Victor Schumann.The ozone layer also protects living beings from this. ==Subtypes== The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the ISO standard ISO 21348: Name Abbreviation Wavelength (nm) Photon energy (eV, aJ) Notes/alternative names Ultraviolet A UV‑A 315–400 Long-wave UV, blacklight, not absorbed by the ozone layer: soft UV. 
Synchrotron light sources can also produce all wavelengths of UV, including those at the boundary of the UV and X‑ray spectra at 10 nm. ==Human health-related effects== The impact of ultraviolet radiation on human health has implications for the risks and benefits of sun exposure and is also implicated in issues such as fluorescent lamps and health. Ultraviolet has a higher frequency (thus a shorter wavelength) than violet light. Ultraviolet lasers have applications in industry (laser engraving), medicine (dermatology, and keratectomy), chemistry (MALDI), free-air secure communications, computing (optical storage), and manufacture of integrated circuits. ===Tunable vacuum ultraviolet (VUV)=== The vacuum ultraviolet (V‑UV) band (100–200 nm) can be generated by non-linear 4 wave mixing in gases by sum or difference frequency mixing of 2 or more longer wavelength lasers. Hence photobiology entertains some, but not all, of the UV spectrum. ==See also== * Biological effects of high-energy visible light * Infrared * Ultraviolet astronomy * Ultraviolet catastrophe * Ultraviolet index * UV marker * UV stabilizers in plastics * Weather testing of polymers ==References== ==Further reading== * * * * == External links == * * Category:Electromagnetic radiation Category:Electromagnetic spectrum Category:Ultraviolet radiation ",It is a phenomenon that occurs only in multi-mode vibration.,It is the misbehavior of a formula for higher frequencies.,It is the standing wave of a string in harmonic resonance.,It is a flaw in classical physics that results in the misallocation of energy.,It is a disproven theory about the distribution of electromagnetic radiation.,B,kaggle200,"The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the Rayleigh–Jeans law accurately predicts experimental results at radiative frequencies below 100 THz, but begins to diverge from empirical observations as these frequencies reach the ultraviolet region of the electromagnetic spectrum.
An example, from Mason's ""A History of the Sciences"", illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Additionally, since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range.
The ""ultraviolet catastrophe"" is the expression of the fact that the formula misbehaves at higher frequencies, i.e. formula_10 as formula_11.","Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range.: 6–7 The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. The phrase refers to the fact that the empirically derived Rayleigh–Jeans law, which accurately predicted experimental results at large wavelengths, failed to do so for short wavelengths. (See the image for further elaboration.) As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a problem. This problem was later found to be due to a property of quanta as proposed by Max Planck: There could be no fraction of a discrete energy package already carrying minimal energy.
This formula is obtained from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of kBT. The ""ultraviolet catastrophe"" is the expression of the fact that the formula misbehaves at higher frequencies, i.e. Bν(T)→∞ as ν→∞. An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Additionally, since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.","- The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. Since the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range.: 6–7 The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans law. Additionally, since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.
The ultraviolet catastrophe, also called the Rayleigh–Jeans catastrophe, was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium would emit an unbounded quantity of energy as wavelength decreased into the ultraviolet range.
The ""ultraviolet catastrophe"" is the expression of the fact that the formula misbehaves at higher frequencies, i.eThis problem was later found to be due to a property of quanta as proposed by Max Planck: There could be no fraction of a discrete energy package already carrying minimal energy.
This formula is obtained from the equipartition theorem of classical statistical mechanics which states that all harmonic oscillator modes (degrees of freedom) of a system at equilibrium have an average energy of kBT The ""ultraviolet catastrophe"" is the expression of the fact that the formula misbehaves at higher frequencies, i.e(See the image for further elaboration.) As the theory diverged from empirical observations when these frequencies reached the ultraviolet region of the electromagnetic spectrum, there was a pr","- The term ""ultraviolet catastrophe"" was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 statistical derivation of the Rayleigh–Jeans lawSince the first use of this term, it has also been used for other predictions of a similar nature, as in quantum electrodynamics and such cases as ultraviolet divergence.
[SEP]What is the ""ultraviolet catastrophe""?","['B', 'D', 'E']",1.0
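The entry above quotes the Rayleigh–Jeans law and its divergence Bν(T) → ∞ as ν → ∞. As a minimal numerical sketch (not part of the source dataset), the Python snippet below compares the Rayleigh–Jeans prediction with Planck's law at a few frequencies for an assumed 5000 K black body; the function names and the chosen temperature are illustrative, not taken from the entry.

# Illustrative comparison: Rayleigh-Jeans grows without bound as frequency rises,
# while Planck's law stays finite -- the "ultraviolet catastrophe" in numbers.
import math

H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def rayleigh_jeans(nu, T):
    # Classical prediction B_nu(T) = 2 nu^2 kB T / c^2 (diverges as nu -> infinity)
    return 2.0 * nu**2 * KB * T / C**2

def planck(nu, T):
    # Planck's law B_nu(T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

T = 5000.0  # kelvin, an assumed blackbody temperature for the comparison
for nu in (1e13, 1e14, 1e15, 1e16):  # infrared through ultraviolet frequencies
    print(f"nu = {nu:.0e} Hz  Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}  "
          f"Planck = {planck(nu, T):.3e}")

At low frequencies the two formulas agree; at ultraviolet frequencies the Rayleigh–Jeans value keeps climbing while the Planck value collapses, which is the discrepancy the entry describes.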
What is the most popular explanation for the shower-curtain effect?,"However, the shower-curtain effect persists when cold water is used, implying that this cannot be the only mechanism at work. ===Bernoulli effect hypothesis === The most popular explanation given for the shower-curtain effect is Bernoulli's principle. The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is running. If air is moving across the inside surface of the shower curtain, Bernoulli's principle says the air pressure there will drop. In a steady state the steam will be replaced by new steam delivered by the shower but in reality the water temperature will fluctuate and lead to times when the net steam production is negative. ===Air pressure=== Colder dense air outside and hot less dense air inside causes higher air pressure on the outside to force the shower curtain inwards to equalise the air pressure, this can be observed simply when the bathroom door is open allowing cold air into the bathroom. ==Solutions== Many shower curtains come with features to reduce the shower-curtain effect. By pushing the curtain in towards the shower, the (short range) vortex and Coandă effects become more significant. This theory presumes that the water flowing out of a shower head causes the air through which the water moves to start flowing in the same direction as the water. There are a few alternative solutions that either attach to the shower curtain directly, attach to the shower rod or attach to the wall. ==References== ==External links== * Scientific American: Why does the shower curtain move toward the water? Professor Schmidt is adamant that this was done ""for fun"" in his own free time without the use of grants. ===Coandă effect=== The Coandă effect, also known as ""boundary layer attachment"", is the tendency of a moving fluid to adhere to an adjacent wall. ===Condensation=== A hot shower will produce steam that condenses on the shower side of the curtain, lowering the pressure there. This would result in a pressure differential between the inside and outside, causing the curtain to move inward. * The Straight Dope: Why does the shower curtain blow in despite the water pushing it out (revisited)? * 2001 Ig Nobel Prize Winners * Fluent NEWS: Shower Curtain Grabs Scientist – But He Lives to Tell Why * Arggh, Why Does the Shower Curtain Attack Me? by Joe Palca. Hanging the curtain rod higher or lower, or especially further away from the shower head, can reduce the effect. The shower-curtain effect may also be used to describe the observation how nearby phase front distortions of an optical wave are more severe than remote distortions of the same amplitude. ==Hypotheses == ===Buoyancy hypothesis === Also called Chimney effect or Stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air. This movement would be parallel to the plane of the shower curtain. * Why does the shower curtain blow up and in instead of down and out? It would be strongest when the gap between the bather and the curtain is smallest, resulting in the curtain attaching to the bather. ===Horizontal vortex hypothesis === A computer simulation of a typical bathroom found that none of the above theories pan out in their analysis, but instead found that the spray from the shower-head drives a horizontal vortex. 
Hanging the weight low against the curtain just above the rim of the shower pan or tub makes it an effective billowing deterrent without allowing the weight to hit the pan or tub and damage it. A (convex) curved shower rod can also be used to hold the curtain against the inside wall of a tub. Curtains help control the ambiance and flow of natural light into the room. Bernoulli's principle states that an increase in velocity results in a decrease in pressure. They may have adhesive suction cups on the bottom edges of the curtain, which are then pushed onto the sides of the shower when in use. ",The pressure differential between the inside and outside of the shower,The decrease in velocity resulting in an increase in pressure,The movement of air across the outside surface of the shower curtain,The use of cold water,Bernoulli's principle,E,kaggle200,"The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is running. The problem of identifying the cause of this effect has been featured in ""Scientific American"" magazine, with several theories given to explain the phenomenon but no definite conclusion.
Also called Chimney effect or Stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air. By pushing the curtain in towards the shower, the (short range) vortex and Coandă effects become more significant. However, the shower-curtain effect persists when cold water is used, implying that this cannot be the only mechanism at work.
Many shower curtains come with features to reduce the shower-curtain effect. They may have adhesive suction cups on the bottom edges of the curtain, which are then pushed onto the sides of the shower when in use. Others may have magnets at the bottom, though these are not effective on acrylic or fiberglass tubs.
The most popular explanation given for the shower-curtain effect is Bernoulli's principle. Bernoulli's principle states that an increase in velocity results in a decrease in pressure. This theory presumes that the water flowing out of a shower head causes the air through which the water moves to start flowing in the same direction as the water. This movement would be parallel to the plane of the shower curtain. If air is moving across the inside surface of the shower curtain, Bernoulli's principle says the air pressure there will drop. This would result in a pressure differential between the inside and outside, causing the curtain to move inward. It would be strongest when the gap between the bather and the curtain is smallest, resulting in the curtain attaching to the bather.","Buoyancy hypothesis Also called Chimney effect or Stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air. By pushing the curtain in towards the shower, the (short range) vortex and Coandă effects become more significant. However, the shower-curtain effect persists when cold water is used, implying that this cannot be the only mechanism at work.
The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is running. The problem of identifying the cause of this effect has been featured in Scientific American magazine, with several theories given to explain the phenomenon but no definite conclusion.
The shower-curtain effect may also be used to describe the observation how nearby phase front distortions of an optical wave are more severe than remote distortions of the same amplitude.
Bernoulli effect hypothesis The most popular explanation given for the shower-curtain effect is Bernoulli's principle. Bernoulli's principle states that an increase in velocity results in a decrease in pressure. This theory presumes that the water flowing out of a shower head causes the air through which the water moves to start flowing in the same direction as the water. This movement would be parallel to the plane of the shower curtain. If air is moving across the inside surface of the shower curtain, Bernoulli's principle says the air pressure there will drop. This would result in a pressure differential between the inside and outside, causing the curtain to move inward. It would be strongest when the gap between the bather and the curtain is smallest, resulting in the curtain attaching to the bather.","- The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is runningThe problem of identifying the cause of this effect has been featured in Scientific American magazine, with several theories given to explain the phenomenon but no definite conclusion.
The shower-curtain effect may also be used to describe the observation how nearby phase front distortions of an optical wave are more severe than remote distortions of the same amplitude.
Bernoulli effect hypothesis The most popular explanation given for the shower-curtain effect is Bernoulli's principle. However, the shower-curtain effect persists when cold water is used, implying that this cannot be the only mechanism at work.
The shower-curtain effect in physics describes the phenomenon of a shower curtain being blown inward when a shower is running. The problem of identifying the cause of this effect has been featured in ""Scientific American"" magazine, with several theories given to explain the phenomenon but no definite conclusion.
Also called Chimney effect or Stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air. By pushing the curtain in towards the shower, the (short range) vortex and Coandă effects become more significant. Others may have magnets at the bottom, though these are not effective on acrylic or fiberglass tubs.
The most popular explanation given for the shower-curtain effect is Bernoulli's principle. However, the shower-curtain effect persists when cold water is used, implying that this cannot be the only mechanism at work.
Many shower curtains come with features to reduce the shower-curtain effect. This would result in a pressure differential between the inside and outside, causing the curtain to move inward. Buoyancy hypothesis Also called Chimney effect or Stack effect, observes that warm air (from the hot shower) rises out over the shower curtain as cooler air (near the floor) pushes in under the curtain to replace the rising air.
[SEP]What is the most popular explanation for the shower-curtain effect?","['E', 'D', 'A']",1.0
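Since the entry above leans on Bernoulli's principle (pressure plus one half rho v squared is constant along a streamline), here is a minimal sketch of the pressure deficit and inward force that argument would predict. The air density, air speed, and curtain area below are assumed placeholder values, not measurements from the cited discussion.

# Bernoulli-style estimate: air moving at speed v along the inside of the curtain
# sits at a pressure deficit of roughly 0.5 * rho * v^2 relative to still air outside.
RHO_AIR = 1.2  # kg/m^3, approximate density of air near room temperature

def bernoulli_pressure_deficit(v_inside, v_outside=0.0, rho=RHO_AIR):
    # Pressure difference (Pa) between the still-air side and the moving-air side
    return 0.5 * rho * (v_inside**2 - v_outside**2)

v = 1.5                             # m/s, assumed air speed entrained by the spray
dp = bernoulli_pressure_deficit(v)  # pressure deficit on the inside of the curtain
area = 1.0                          # m^2, assumed curtain area facing the spray
print(f"pressure deficit ~ {dp:.2f} Pa, net inward force ~ {dp * area:.2f} N")

Even a fraction of a pascal over a square metre of curtain gives a force on the order of a newton, which is why the Bernoulli hypothesis is at least plausible, even though the entry notes it cannot be the whole story.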
What is the butterfly effect?,"The butterfly effect describes a phenomenon in chaos theory whereby a minor change in circumstances can cause a large change in outcome. In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. ==History== In The Vocation of Man (1800), Johann Gottlieb Fichte says ""you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole"". A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. While the ""butterfly effect"" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. In the book entitled The Essence of Chaos published in 1993, Lorenz defined butterfly effect as: ""The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."" The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. According to science journalist Peter Dizikes, the films Havana and The Butterfly Effect mischaracterize the butterfly effect by asserting the effect can be calculated with certainty, because this is the opposite of its scientific meaning in chaos theory as it relates to the unpredictability of certain physical systems; Dizikes writes in 2008, ""The larger meaning of the butterfly effect is not that we can readily track such connections, but that we can't."" In recent studies, it was reported that both meteorological and non- meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. Other authors suggest that the butterfly effect can be observed in quantum systems. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. A short documentary that explains the ""butterfly effect"" in context of Lorenz's work. This quantum butterfly effect has been demonstrated experimentally. 
Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. In the 1993 movie Jurassic Park, Dr. Ian Malcolm (played by Jeff Goldblum) attempts to explain chaos theory to Dr. Ellie Sattler (played by Laura Dern), specifically referencing the butterfly effect, by stating ""It simply deals with unpredictability in complex systems"", and ""The shorthand is 'the butterfly effect.' The concept has been widely adopted by popular culture, and interpreted to mean that small events have a rippling effect that cause much larger events to occur, and has become a common reference. ==Examples== ===""A Sound of Thunder"" === The 1952 short story ""A Sound of Thunder"" by Ray Bradbury explores the concept of how the death of a butterfly in the past could have drastic changes in the future, and has been used as an example of ""the butterfly effect"" and how to consider chaos theory and the physics of time travel. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. The butterfly effect was also used as a justification for the suppression of news in China about the death of Li Wenliang. ==See also== * Alternate history * Time travel in fiction * List of time travel works of fiction ==References== ==External links== * ""The meaning of the butterfly: Why pop culture loves the 'butterfly effect,' and gets it totally wrong"", Peter Dizikes, The Boston Globe, 8 June 2008 Category:Chaos theory Category:Science in popular culture Category:Topics in popular culture ","The butterfly effect is a physical cause that occurs when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, and its velocity is assumed to be caused by the force of gravity accelerating it.",The butterfly effect is a distributed causality that opens up the opportunity to understand the relationship between necessary and sufficient conditions in classical (Newtonian) physics.,The butterfly effect is a proportionality between the cause and the effect of a physical phenomenon in classical (Newtonian) physics.,The butterfly effect is a small push that is needed to set a massive sphere into motion when it is caused to roll down a slope starting from a point of unstable equilibrium.,The butterfly effect is a phenomenon that highlights the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions.,E,kaggle200,"During the COVID-19 pandemic, ""the butterfly effect"" was also used to describe the impact of increased waiting times within the health care system in the UK, i.e. ""The knock-on effect this would have on my day – the beating of a butterfly's wings in the morning causing tornadoes by the afternoon"", and as a justification for the suppression of news in China about the death of Li Wenliang.
In the 1990 film ""Havana"", the character played by Robert Redford states, ""A butterfly can flutter its wings over a flower in China and cause a hurricane in the Caribbean"", and scientists ""can even calculate the odds"". According to science journalist Peter Dizikes, the films ""Havana"" and ""The Butterfly Effect"" mischaracterize the butterfly effect by asserting the effect can be calculated with certainty, because this is the opposite of its scientific meaning in chaos theory as it relates to the unpredictability of certain physical systems; Dizikes writes in 2008, ""The larger meaning of the butterfly effect is not that we can readily track such connections, but that we can't.""
""My Butterfly"", an episode from the TV show ""Scrubs"", features two separate timelines, each influenced by the butterfly effect. The season four premiere episodes of ""Ugly Betty"" are named ""The Butterfly Effect Part 1"" and ""The Butterfly Effect Part 2"", and a review of the episodes in ""Vulture"" states, """"Ugly Betty"" is certainly invested in the physics of the Butterfly Effect, too: One small change can indeed cause large-scale effects.""
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly.","Other popular culture During the COVID-19 pandemic, doctor and journalist Peter Endicott used the butterfly effect to describe the impact of increased waiting times within the health care system in the UK, i.e. ""The knock-on effect this would have on my day – the beating of a butterfly's wings in the morning causing tornadoes by the afternoon."" The butterfly effect was also used as a justification for the suppression of news in China about the death of Li Wenliang.
Films The influence of the concept can be seen in the films The Terminator, Back to the Future, X-Men: Days of Future Past, Maheshinte Prathikaram and Cloud Atlas.In the 1990 film Havana, the character played by Robert Redford states, ""A butterfly can flutter its wings over a flower in China and cause a hurricane in the Caribbean"", and scientists ""can even calculate the odds"". According to science journalist Peter Dizikes, the films Havana and The Butterfly Effect mischaracterize the butterfly effect by asserting the effect can be calculated with certainty, because this is the opposite of its scientific meaning in chaos theory as it relates to the unpredictability of certain physical systems; Dizikes writes in 2008, ""The larger meaning of the butterfly effect is not that we can readily track such connections, but that we can't.""In the 1993 movie Jurassic Park, Dr. Ian Malcolm (played by Jeff Goldblum) attempts to explain chaos theory to Dr. Ellie Sattler (played by Laura Dern), specifically referencing the butterfly effect, by stating ""It simply deals with unpredictability in complex systems"", and ""The shorthand is 'the butterfly effect.' A butterfly can flap its wings in Peking, and in Central Park, you get rain instead of sunshine.""Other examples include Terry Pratchett's novel Interesting Times, which tells of the magical ""Quantum Weather Butterfly"" with the ability to manipulate weather patterns. The 2009 film Mr. Nobody incorporates the butterfly effect and the concept of smaller events that result in larger changes altering a person's life.The 2020 - 2021 miniseries of short films Explaining the Pandemic to my Past Self by Julie Nolke incorporates the butterfly effect as a limitation on how much she can explain to her past self.The 2021 film Needle in a Timestack is described in a review by The Guardian as having a plot where the character played by Leslie Odom Jr. ""sets off a calamitous butterfly effect that results in, not the survival of dinosaurs, not a deadly plague, not an Allied loss of the second world war, but him being married to Freida Pinto instead of Cynthia Erivo."" Television The concept is referenced in a Treehouse of Horror episode of the television series The Simpsons.""My Butterfly"", an episode from the TV show Scrubs, features two separate timelines, each influenced by the butterfly effect. The season four premiere episodes of Ugly Betty are named ""The Butterfly Effect Part 1"" and ""The Butterfly Effect Part 2"", and a review of the episodes in Vulture states, ""Ugly Betty is certainly invested in the physics of the Butterfly Effect, too: One small change can indeed cause large-scale effects.""The miniseries Black Bird (2022) begins with a narration about the butterfly effect.
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly.","The season four premiere episodes of Ugly Betty are named ""The Butterfly Effect Part 1"" and ""The Butterfly Effect Part 2"", and a review of the episodes in Vulture states, ""Ugly Betty is certainly invested in the physics of the Butterfly Effect, too: One small change can indeed cause large-scale effects.""The miniseries Black Bird (2022) begins with a narration about the butterfly effect.
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. According to science journalist Peter Dizikes, the films ""Havana"" and ""The Butterfly Effect"" mischaracterize the butterfly effect by asserting the effect can be calculated with certainty, because this is the opposite of its scientific meaning in chaos theory as it relates to the unpredictability of certain physical systems; Dizikes writes in 2008, ""The larger meaning of the butterfly effect is not that we can readily track such connections, but that we can't.""
""My Butterfly"", an episode from the TV show ""Scrubs"", features two separate timelines, each influenced by the butterfly effect. The season four premiere episodes of ""Ugly Betty"" are named ""The Butterfly Effect Part 1"" and ""The Butterfly Effect Part 2"", and a review of the episodes in ""Vulture"" states, """"Ugly Betty"" is certainly invested in the physics of the Butterfly Effect, too: One small change can indeed cause large-scale effects.""
A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. During the COVID-19 pandemic, ""the butterfly effect"" was also used to describe the impact of increased waiting times within the health care system in the UK. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly.
[SEP]What is the butterfly effect?","['E', 'D', 'C']",1.0
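The butterfly-effect entry above turns on sensitive dependence on initial conditions. A small illustrative sketch (not from the source) integrates the Lorenz system, the standard example behind the metaphor, for two initial states differing by 1e-8 and prints how their separation grows; the parameters are the conventional sigma = 10, rho = 28, beta = 8/3, and the step size and run length are arbitrary choices.

# Two nearly identical initial conditions diverge by many orders of magnitude
# under the Lorenz equations, illustrating sensitive dependence on initial conditions.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    # One fixed-step fourth-order Runge-Kutta update of the Lorenz state
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)   # tiny perturbation of the initial condition
dt, steps = 0.01, 3000
for i in range(1, steps + 1):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if i % 500 == 0:
        sep = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
        print(f"t = {i * dt:5.1f}  separation = {sep:.3e}")

The printed separation climbs from the initial 1e-8 toward the size of the attractor itself, which is the quantitative content behind the "small change, large consequence" phrasing in the entry.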
What is the 'reactive Leidenfrost effect' observed in non-volatile materials?,"The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φRL= τconv/τrxn), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10−1< φRL< 10+1. The Leidenfrost temperatures for glycerol and common alcohols are significantly smaller because of their lower surface tension values (density and viscosity differences are also contributing factors.) == Reactive Leidenfrost effect == thumb|Reactive Leidenfrost effect of cellulose on silica, Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. When the temperature exceeds the Leidenfrost point, the Leidenfrost effect appears. The temperature of the solid surface beyond which the liquid undergoes the Leidenfrost phenomenon is termed the Leidenfrost temperature. The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Conversely, the inverse Leidenfrost effect lets drops of relatively warm liquid levitate on a bath of liquid nitrogen. == Leidenfrost point == thumb|A water droplet experiencing Leidenfrost effect on a hot stove plate The Leidenfrost point signifies the onset of stable film boiling. Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other. Leidenfrost effect occurs after transition boiling. Since the Leidenfrost phenomenon is a special case of film boiling, the Leidenfrost temperature is related to the minimum film boiling temperature via a relation which factors in the properties of the solid being used. thumb|Leidenfrost droplet thumb|Demonstration of the Leidenfrost effect thumb|Leidenfrost effect of a single drop of water The Leidenfrost effect is a physical phenomenon in which a liquid, close to a surface that is significantly hotter than the liquid's boiling point, produces an insulating vapor layer that keeps the liquid from boiling rapidly. * High speed photography of the reactive Leidenfrost effect of cellulose on porous surfaces (macroporous alumina) was also shown to suppress the reactive Leidenfrost effect and enhance overall heat transfer rates to the particle from the surface. If the pan's temperature is at or above the Leidenfrost point, which is approximately for water, the water skitters across the pan and takes longer to evaporate than it would take if the water droplets had been sprinkled onto a cooler pan. thumb|Cooling performances of traditional structured surface and STA at T = 1000 °C == Details == thumb|A video clip demonstrating the Leidenfrost effect The effect can be seen as drops of water are sprinkled onto a pan at various times as it heats up. Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. The effect happens because, at temperatures at or above the Leidenfrost point, the bottom part of the water droplet vaporizes immediately on contact with the hot pan. The temperature at which the Leidenfrost effect appears is difficult to predict. 
Henry developed a model for Leidenfrost phenomenon which includes transient wetting and microlayer evaporation. While the Leidenfrost temperature is not directly related to the surface tension of the fluid, it is indirectly dependent on it through the film boiling temperature. In the 2009 season 7 finale of MythBusters, ""Mini Myth Mayhem"", the team demonstrated that a person can wet their hand and briefly dip it into molten lead without injury, using the Leidenfrost effect as the scientific basis. == See also == * Critical heat flux * Region-beta paradox == References == == External links == * Essay about the effect and demonstrations by Jearl Walker (PDF) * Site with high- speed video, pictures and explanation of film-boiling by Heiner Linke at the University of Oregon, USA * ""Scientists make water run uphill"" by BBC News about using the Leidenfrost effect for cooling of computer chips. Reactive flash volatilization (RFV) is a chemical process that rapidly converts nonvolatile solids and liquids to volatile compounds by thermal decomposition for integration with catalytic chemistries. == Chemistry == right|300px The utilization of heavy fossil fuels or biomass rich in carbohydrates, (C6H10O5)n, for fuels or chemicals requires an initial thermochemical process called pyrolysis which fractures large polymers to mixtures of small volatile organic compounds (VOCs). In 1756, Leidenfrost observed that water droplets supported by the vapor film slowly evaporate as they move about on the hot surface. ","The 'reactive Leidenfrost effect' is a phenomenon where solid particles float above hot surfaces and move erratically, observed in non-volatile materials.","The 'reactive Leidenfrost effect' is a phenomenon where solid particles float above hot surfaces and move erratically, observed in volatile materials.","The 'reactive Leidenfrost effect' is a phenomenon where solid particles sink into hot surfaces and move slowly, observed in non-volatile materials.","The 'reactive Leidenfrost effect' is a phenomenon where solid particles float above cold surfaces and move erratically, observed in non-volatile materials.","The 'reactive Leidenfrost effect' is a phenomenon where solid particles sink into cold surfaces and move slowly, observed in non-volatile materials.",A,kaggle200,"The temperature at which the Leidenfrost effect appears is difficult to predict. Even if the volume of the drop of liquid stays the same, the Leidenfrost point may be quite different, with a complicated dependence on the properties of the surface, as well as any impurities in the liquid. Some research has been conducted into a theoretical model of the system, but it is quite complicated.
Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other.
A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction.
The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest.","In Jules Verne's 1876 book Michael Strogoff, the protagonist is saved from being blinded with a hot blade by evaporating tears.In the 2009 season 7 finale of MythBusters, ""Mini Myth Mayhem"", the team demonstrated that a person can wet their hand and briefly dip it into molten lead without injury, using the Leidenfrost effect as the scientific basis.
The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest.It has been demonstrated that it is possible to stabilize the Leidenfrost vapor layer of water by exploiting superhydrophobic surfaces. In this case, once the vapor layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled.Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other.The Leidenfrost effect has been used for the development of high sensitivity ambient mass spectrometry. Under the influence of the Leidenfrost condition, the levitating droplet does not release molecules, and the molecules are enriched inside the droplet. At the last moment of droplet evaporation, all the enriched molecules release in a short time period and thereby increase the sensitivity.A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction.The effect also applies when the surface is at room temperature but the liquid is cryogenic, allowing liquid nitrogen droplets to harmlessly roll off exposed skin. Conversely, the inverse Leidenfrost effect lets drops of relatively warm liquid levitate on a bath of liquid nitrogen.
Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. Cellulose was shown to decompose to short-chain oligomers which melt and wet smooth surfaces with increasing heat transfer associated with increasing surface temperature. Above 675 °C (1,247 °F), cellulose was observed to exhibit transition boiling with violent bubbling and associated reduction in heat transfer. Liftoff of the cellulose droplet (depicted at the right) was observed to occur above about 750 °C (1,380 °F), associated with a dramatic reduction in heat transfer.High speed photography of the reactive Leidenfrost effect of cellulose on porous surfaces (macroporous alumina) was also shown to suppress the reactive Leidenfrost effect and enhance overall heat transfer rates to the particle from the surface. The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φRL= τconv/τrxn), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10−1< φRL< 10+1. The reactive Leidenfrost effect with cellulose will occur in numerous high temperature applications with carbohydrate polymers, including biomass conversion to biofuels, preparation and cooking of food, and tobacco use.The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Examples include decomposition of ethanol, diethyl carbonate, and glycerol.","The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φRL= τconv/τrxn), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10−1< φRL< 10+1Conversely, the inverse Leidenfrost effect lets drops of relatively warm liquid levitate on a bath of liquid nitrogen.
Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. In this case, once the vapor layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled. Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other. The Leidenfrost effect has been used for the development of high sensitivity ambient mass spectrometry. The temperature at which the Leidenfrost effect appears is difficult to predict. Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. The reactive Leidenfrost effect with cellulose will occur in numerous high temperature applications with carbohydrate polymers, including biomass conversion to biofuels, preparation and cooking of food, and tobacco use. The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Some research has been conducted into a theoretical model of the system, but it is quite complicated.
Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other.
A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction.
The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest.
[SEP]What is the 'reactive Leidenfrost effect' observed in non-volatile materials?","['A', 'B', 'C']",1.0
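The entry above defines the reactive Leidenfrost criterion through the dimensionless quantity φRL = τconv/τrxn, with the reactive regime reported for 10−1 < φRL < 10+1. Below is a minimal sketch of that criterion; the time-constant values are assumed placeholders, not data from the cited cellulose experiments.

# phi_RL compares the particle heat-transfer time constant to the reaction time constant;
# the entry above reports the reactive Leidenfrost regime for 0.1 < phi_RL < 10.
def reactive_leidenfrost_number(tau_conv, tau_rxn):
    # Ratio of solid-particle heat-transfer time constant to particle-reaction time constant
    return tau_conv / tau_rxn

def in_reactive_leidenfrost_regime(phi_rl, lower=1e-1, upper=1e1):
    # True when phi_RL falls inside the reported window
    return lower < phi_rl < upper

tau_conv = 0.5   # s, assumed convective heat-transfer time constant
tau_rxn = 0.2    # s, assumed decomposition time constant
phi = reactive_leidenfrost_number(tau_conv, tau_rxn)
print(f"phi_RL = {phi:.2f}, reactive Leidenfrost regime: {in_reactive_leidenfrost_regime(phi)}")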
What is reciprocal length or inverse length?,"Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1), the reciprocal centimetre or inverse centimetre (symbol: cm−1). Quantities measured in reciprocal length include: *absorption coefficient or attenuation coefficient, in materials science *curvature of a line, in mathematics *gain, in laser physics *magnitude of vectors in reciprocal space, in crystallography *more generally any spatial frequency e.g. in cycles per unit length *optical power of a lens, in optics *rotational constant of a rigid rotor, in quantum mechanics *wavenumber, or magnitude of a wavevector, in spectroscopy *density of a linear feature in hydrology and other fields; see kilometre per square kilometre In optics, the dioptre is a unit equivalent to reciprocal metre. ==Measure of energy== In some branches of physics, the universal constants c, the speed of light, and ħ, the reduced Planck constant, are treated as being unity (i.e. that c = ħ = 1), which leads to mass, energy, momentum, frequency and reciprocal length all having the same unit. As a result, reciprocal length is used as a measure of energy. The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals (one hundredth) as much as a reciprocal centimetre. Five reciprocal metres are five times as much energy as one reciprocal metre. ==See also== * Reciprocal second ==Further reading== * Category:Length Category:Physical quantities Category:SI derived units For example, a kilometre is . ===Non-SI=== In the centimetre–gram–second system of units, the basic unit of length is the centimetre, or of a metre. thumb|right|A ruler, depicting two customary units of length, the centimetre and the inch A unit of length refers to any arbitrarily chosen and accepted reference standard for measurement of length. In physics, length scale is a particular length or distance determined with the precision of at most a few orders of magnitude. *The Planck length (Planck scale) is much shorter yet - about \ell_{P}\sim 10^{-35} meters (10^{19} GeV ^{-1} in natural units), and is derived from Newton's gravitational constant which has units of length squared. Some common natural units of length are included in this table: Atomic property Symbol Length, in metres Reference The classical electron radius re The Compton wavelength of the electron λC The reduced Compton wavelength of the electron C The Compton wavelength (or reduced Compton wavelength) of any fundamental particle x The Bohr radius of the hydrogen atom (Atomic unit of length) a0 The reduced wavelength of hydrogen radiation 1 / R∞ The Planck length 𝓁P Stoney unit of length lS Quantum chromodynamics (QCD) unit of length lQCD Natural units based on the electronvolt 1 eV−1 ==Archaic== Archaic units of distance include: *cana *cubit *rope *league *li (China) *pace (the ""double pace"" of about 5 feet used in Ancient Rome) *verst (Russia) ==Informal== In everyday conversation, and in informal literature, it is common to see lengths measured in units of objects of which everyone knows the approximate width. 
Common examples are: *Double-decker bus (9.5–11 metres in length) *Football field (100 yards in length) *Thickness of a human hair (around 80 micrometres) ==Other== Horse racing and other equestrian activities keep alive: * furlong ≈ *horse length ≈ ==See also== * *List of examples of lengths * *Medieval weights and measures *Orders of magnitude (length) *System of measurement *Units of measurement ==References== ==Further reading== * Length scales are usually the operative scale (or at least one of the scales) in dimensional analysis. For example, the reciprocal centimetre, , is an energy unit equal to the energy of a photon with a wavelength of 1 cm. The metric system is sub-divided into SI and non-SI units. ==Metric system== ===SI=== The base unit in the International System of Units (SI) is the metre, defined as ""the length of the path travelled by light in vacuum during a time interval of seconds."" The concept of length scale is particularly important because physical phenomena of different length scales cannot affect each other and are said to decouple. Other SI units are derived from the metre by adding prefixes, as in millimetre or kilometre, thus producing systematic decimal multiples and submultiples of the base unit that span many orders of magnitude. Common imperial units and U.S. customary units of length include: * thou or mil ( of an inch) * inch () * foot (12 inches, 0.3048 m) * yard (3 feet, 0.9144 m) * (terrestrial) mile (5280 feet, 1609.344 m) * (land) league ==Marine== In addition, the following are used by sailors: * fathom (for depth; only in non-metric countries) (2 yards = 1.8288 m) * nautical mile (one minute of arc of latitude = ) ==Aviation== Aviators use feet for altitude worldwide (except in Russia and China) and nautical miles for distance. ==Surveying== thumb|right|Determination of the rod, using the length of the left foot of 16 randomly chosen people coming from church service Surveyors in the United States continue to use: * chain (22 yards, or ) * rod (also called pole or perch) (quarter of a chain, 5 yards, or ) ==Science== ===Astronomy=== Astronomical measure uses: *Earth radius ≈ 6,371 km * Lunar distance LD ≈ . In electrical engineering, electrical length is a dimensionless parameter equal to the physical length of an electrical conductor such as a cable or wire, divided by the wavelength of alternating current at a given frequency traveling through the conductor. ","Reciprocal length or inverse length is a quantity or measurement used in physics and chemistry. It is the reciprocal of time, and common units used for this measurement include the reciprocal second or inverse second (symbol: s−1), the reciprocal minute or inverse minute (symbol: min−1).","Reciprocal length or inverse length is a quantity or measurement used in geography and geology. It is the reciprocal of area, and common units used for this measurement include the reciprocal square metre or inverse square metre (symbol: m−2), the reciprocal square kilometre or inverse square kilometre (symbol: km−2).","Reciprocal length or inverse length is a quantity or measurement used in biology and medicine. It is the reciprocal of mass, and common units used for this measurement include the reciprocal gram or inverse gram (symbol: g−1), the reciprocal kilogram or inverse kilogram (symbol: kg−1).","Reciprocal length or inverse length is a quantity or measurement used in economics and finance. 
It is the reciprocal of interest rate, and common units used for this measurement include the reciprocal percent or inverse percent (symbol: %−1), the reciprocal basis point or inverse basis point (symbol: bp−1).","Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. It is the reciprocal of length, and common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1), the reciprocal centimetre or inverse centimetre (symbol: cm−1).",E,kaggle200,"Every real or complex number excluding zero has a reciprocal, and reciprocals of certain irrational numbers can have important special properties. Examples include the reciprocal of ""e"" (≈ 0.367879) and the golden ratio's reciprocal (≈ 0.618034). The first reciprocal is special because no other positive number can produce a lower number when put to the power of itself; formula_25 is the global minimum of formula_26. The second number is the only positive number that is equal to its reciprocal plus one:formula_27. Its additive inverse is the only negative number that is equal to its reciprocal minus one:formula_28.
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals (one hundredth) as much as a reciprocal centimetre. Five reciprocal metres are five times as much energy as one reciprocal metre.
Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m), the reciprocal centimetre or inverse centimetre (symbol: cm).","Quantities measured in reciprocal length include: absorption coefficient or attenuation coefficient, in materials science curvature of a line, in mathematics gain, in laser physics magnitude of vectors in reciprocal space, in crystallography more generally any spatial frequency e.g. in cycles per unit length optical power of a lens, in optics rotational constant of a rigid rotor, in quantum mechanics wavenumber, or magnitude of a wavevector, in spectroscopy density of a linear feature in hydrology and other fields; see kilometre per square kilometre surface area to volume ratioIn optics, the dioptre is a unit equivalent to reciprocal metre.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals 10−2 (one hundredth) as much as a reciprocal centimetre. Five reciprocal metres are five times as much energy as one reciprocal metre.
Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1), the reciprocal centimetre or inverse centimetre (symbol: cm−1).","Five reciprocal metres are five times as much energy as one reciprocal metre.
Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. Quantities measured in reciprocal length include: absorption coefficient or attenuation coefficient, in materials science curvature of a line, in mathematics gain, in laser physics magnitude of vectors in reciprocal space, in crystallography more generally any spatial frequency e.g. As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m), the reciprocal centimetre or inverse centimetre (symbol: cm). As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1), the reciprocal centimetre or inverse centimetre (symbol: cm−1). For example, in terms of energy, one reciprocal metre equals 10−2 (one hundredth) as much as a reciprocal centimetre. Its additive inverse is the only negative number that is equal to its reciprocal minus one:formula_28.
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals 10−2 (one hundredth) as much as a reciprocal centimetre. In cycles per unit length optical power of a lens, in optics rotational constant of a rigid rotor, in quantum mechanics wavenumber, or magnitude of a wavevector, in spectroscopy density of a linear feature in hydrology and other fields; see kilometre per square kilometre surface area to volume ratio. In optics, the dioptre is a unit equivalent to reciprocal metre.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. - Every real or complex number excludi
Reciprocal length or inverse length is a quantity or measurement used in several branches of science and mathematics. Quantities measured in reciprocal length include: absorption coefficient or attenuation coefficient, in materials science curvature of a line, in mathematics gain, in laser physics magnitude of vectors in reciprocal space, in crystallography more generally any spatial frequency e.g. As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m), the reciprocal centimetre or inverse centimetre (symbol: cm). As the reciprocal of length, common units used for this measurement include the reciprocal metre or inverse metre (symbol: m−1), the reciprocal centimetre or inverse centimetre (symbol: cm−1). For example, in terms of energy, one reciprocal metre equals 10−2 (one hundredth) as much as a reciprocal centimetre. Its additive inverse is the only negative number that is equal to its reciprocal minus one:formula_28.
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. For example, in terms of energy, one reciprocal metre equals 10−2 (one hundredth) as much as a reciprocal centimetre. In cycles per unit length optical power of a lens, in optics rotational constant of a rigid rotor, in quantum mechanics wavenumber, or magnitude of a wavevector, in spectroscopy density of a linear feature in hydrology and other fields; see kilometre per square kilometre surface area to volume ratio. In optics, the dioptre is a unit equivalent to reciprocal metre.
The energy is inversely proportional to the size of the unit of which the reciprocal is used, and is proportional to the number of reciprocal length units. - Every real or complex number excludi[SEP]What is reciprocal length or inverse length?","['E', 'D', 'C']",1.0
Which of the following statements is true about the categorization of planetary systems according to their orbital dynamics?,"There are other properties of orbits that allow for different classifications. Following that is the Sun, then Mars, Jupiter and Saturn. thumb|Conceptual framework for classical astrology The astrological descriptions attached to the seven classical planets have been preserved since ancient times. In astrology, planets have a meaning different from the astronomical understanding of what a planet is. The qualities inherited from the planets by their children are as follows: ; Saturn: industrious, melancholic, and tranquil ; Jupiter: charming and hunting ; Mars: soldiering and warfare ; Sun: music and athleticism ; Moon: shy and tenderness ; Mercury: prudent, crafty, lovable, and commerce ; Venus: amorousness and passion. ==Classical planets== The seven classical planets are those easily seen with the naked eye, and were thus known to ancient astrologers. The social or transpersonal planets are Jupiter and Saturn. Astrologers call the seven classical planets ""the seven personal and social planets"", because they are said to represent the basic human drives of every individual. Planetary means relating to a planet or planets. The planets are also related to each other in the form of aspects. Astrologers retain this definition of the 7 Classical Planets today. Modeling the Solar System is a case of the n-body problem of physics, which is generally unsolvable except by numerical simulation. === Resonance === An orbital resonance happens when any two periods have a simple numerical ratio. Another common form of resonance in the Solar System is spin–orbit resonance, where the rotation period (the time it takes the planet or moon to rotate once about its axis) has a simple numerical relationship with its orbital period. Astrologers consider the ""extra- Saturnian"" planets to be ""impersonal"" or generational planets, meaning their effects are felt more across whole generations of society. Both are erratic phenomena, and are rarely visible to the naked-eye; they are ignored by most modern astrologers. ==Fictitious and hypothetical planets== Some astrologers have hypothesized about the existence of unseen or undiscovered planets. Another example is Mercury, which is in a 3:2 spin–orbit resonance with the Sun. === Predictability === The planets' orbits are chaotic over longer timescales, in such a way that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. Astrologers differ on the signs associated with each planet's exaltation. ==Planetary symbolism== This table shows the astrological planets (as distinct from the astronomical) and the Greek and Roman deities associated with them. The Classical planets fit neatly into the theories of Aristotle and Ptolemy, they each are part of a Celestial sphere. The outer modern planets Uranus, Neptune and Pluto are often called the collective or transcendental planets. The personal planets are the Sun, Moon, Mercury, Venus and Mars. For instance, the description of Mars is masculine, impulsive, and active. An orbit can also be chaotic. 
",Planetary systems cannot be categorized based on their orbital dynamics.,"Planetary systems can be categorized as resonant, non-resonant-interacting, hierarchical, or some combination of these, but only based on the number of planets in the system.",Planetary systems can only be categorized as resonant or non-resonant-interacting.,"Planetary systems can be categorized as resonant, non-resonant-interacting, hierarchical, or some combination of these.",Planetary systems can only be categorized as hierarchical or non-hierarchical.,D,kaggle200,"Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ).
There are at least four sdB stars which may possess planetary systems. However, in all four cases, subsequent research has indicated that the evidence for the planets' existence was not as strong as previously believed, and whether or not the planetary systems exist is not proven either way.
The planetary systems of stars other than the Sun and the Solar System are a staple element in many works of the science fiction genre.
Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. In resonant systems the orbital periods of the planets are in integer ratios. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance.","Terran Trade Authority (1978–1980), novels by Stewart Cowley. Proxima Centauri is the home system of the Proximans, adversaries of Terrans and Alphans during the Proximan War.
BattleTech (1984), wargame and related products launched by The FASA Corporation. The Mizar system hosts a habitable planet noted for its luxurious resorts and vain inhabitants.
Orbital dynamics Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. In resonant systems the orbital periods of the planets are in integer ratios. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance.
Giant planets are found in mean-motion resonances more often than smaller planets.
In interacting systems the planets' orbits are close enough together that they perturb the orbital parameters. The Solar System could be described as weakly interacting. In strongly interacting systems Kepler's laws do not hold.
In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two-bodies, e.g. in a star with a close-in hot jupiter with another gas giant much further out, the star and hot jupiter form a pair that appears as a single object to another planet that is far enough out.
Other, as yet unobserved, orbital possibilities include: double planets; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes.","However, in all four cases, subsequent research has indicated that the evidence for the planets' existence was not as strong as previously believed, and whether or not the planetary systems exist is not proven either way.
The planetary systems of stars other than the Sun and the Solar System are a staple element in many works of the science fiction genre.
Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. The Mizar system hosts a habitable planet noted for its luxurious resorts and vain inhabitants.
Orbital dynamics Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. In strongly interacting systems Kepler's laws do not hold.
In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two-bodies, e.g. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance.
Giant planets are found in mean-motion resonances more often than smaller planets.
In interacting systems the planets' orbits are close enough together that they perturb the orbital parameters. In resonant systems the orbital periods of the planets are in integer ratios. In a star with a close-in hot jupiter with another gas giant much further out, the star and hot jupiter form a pair that appears as a single object to another planet that is far enough out.
Other, as yet unobserved, orbital possibilities include: double planets; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance. Since 1992, over four thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ).
There are at least four sdB stars which may possess planetary systems. The Solar System could be described as weakly interacting. - Some astronomers search for
The planetary systems of stars other than the Sun and the Solar System are a staple element in many works of the science fiction genre.
Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. The Mizar system hosts a habitable planet noted for its luxurious resorts and vain inhabitants.
Orbital dynamics Planetary systems can be categorized according to their orbital dynamics as resonant, non-resonant-interacting, hierarchical, or some combination of these. In strongly interacting systems Kepler's laws do not hold.
In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two-bodies, e.g. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance.
Giant planets are found in mean-motion resonances more often than smaller planets.
In interacting systems the planets' orbits are close enough together that they perturb the orbital parameters. In resonant systems the orbital periods of the planets are in integer ratios. In a star with a close-in hot jupiter with another gas giant much further out, the star and hot jupiter form a pair that appears as a single object to another planet that is far enough out.
Other, as yet unobserved, orbital possibilities include: double planets; various co-orbital planets such as quasi-satellites, trojans and exchange orbits; and interlocking orbits maintained by precessing orbital planes. The Kepler-223 system contains four planets in an 8:6:4:3 orbital resonance. Since 1992, over four thousand exoplanets have been discovered ( planets in planetary systems including multiple planetary systems as of ).
There are at least four sdB stars which may possess planetary systems. The Solar System could be described as weakly interacting. - Some astronomers search for[SEP]Which of the following statements is true about the categorization of planetary systems according to their orbital dynamics?","['D', 'E', 'B']",1.0
What is the propagation constant in sinusoidal waves?,"The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. ==Alternative names== The term ""propagation constant"" is somewhat of a misnomer as it usually varies strongly with ω. The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. Thus they are directly proportional to the frequency. :\alpha_d={{\pi}\sqrt{\varepsilon_r}\over{\lambda}}{\tan \delta} ===Optical fibre=== The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant. ==Phase constant== In electromagnetic theory, the phase constant, also called phase change constant, parameter or coefficient is the imaginary component of the propagation constant for a plane wave. Note that in the field of transmission lines, the term transmission coefficient has a different meaning despite the similarity of name: it is the companion of the reflection coefficient. ==Definition== The propagation constant, symbol , for a given system is defined by the ratio of the complex amplitude at the source of the wave to the complex amplitude at some distance , such that, : \frac{A_0}{A_x} = e^{\gamma x} Since the propagation constant is a complex quantity we can write: : \gamma = \alpha + i \beta\ where * , the real part, is called the attenuation constant * , the imaginary part, is called the phase constant * i \equiv j \equiv \sqrt{ -1\ }\ ; more often is used for electrical circuits. It is the real part of the propagation constant and is measured in nepers per metre. The propagation constant for conducting lines can be calculated from the primary line coefficients by means of the relationship : \gamma= \sqrt{ Z Y\ } where : Z = R + i\ \omega L\ , the series impedance of the line per unit length and, : Y = G + i\ \omega C\ , the shunt admittance of the line per unit length. ===Plane wave=== The propagation factor of a plane wave traveling in a linear media in the direction is given by P = e^{-\gamma x} where * \gamma = \alpha + i\ \beta = \sqrt{i\ \omega\ \mu\ (\sigma + i\ \omega \varepsilon)\ }\ * x = distance traveled in the direction * \alpha =\ attenuation constant in the units of nepers/meter * \beta =\ phase constant in the units of radians/meter * \omega=\ frequency in radians/second * \sigma =\ conductivity of the media * \varepsilon = \varepsilon' - i\ \varepsilon \ = complex permitivity of the media * \mu = \mu' - i\ \mu \; = complex permeability of the media * i \equiv \sqrt{-1\ } The sign convention is chosen for consistency with propagation in lossy media. The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the more usual base 10 that is used in telecommunications in other situations. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. Attenuation constant can be defined by the amplitude ratio :\left|\frac{A_0}{A_x}\right|=e^{\alpha x} The propagation constant per unit length is defined as the natural logarithm of the ratio of the sending end current or voltage to the receiving end current or voltage. 
===Conductive lines=== The attenuation constant for conductive lines can be calculated from the primary line coefficients as shown above. Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant: \lambda = \frac {2 \pi}{\beta} \qquad v_p = \frac{\omega}{\beta} \qquad \delta = \frac{1}{\alpha} ==Attenuation constant== In telecommunications, the term attenuation constant, also called attenuation parameter or attenuation coefficient, is the attenuation of an electromagnetic wave propagating through a medium per unit distance from the source. The term sinusoidal thereby collectively refers to both sine waves and cosine waves with any phase offset. == Occurrence == thumb|400px|Illustrating the cosine wave's fundamental relationship to the circle. thumb|3D complex plane model to visualize usefulness for translation of domains This wave pattern occurs often in nature, including wind waves, sound waves, and light waves. These include transmission parameter, transmission function, propagation parameter, propagation coefficient and transmission constant. It represents the change in phase per unit length along the path travelled by the wave at any instant and is equal to the real part of the angular wavenumber of the wave. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc. ===Cascaded networks=== The ratio of output to input voltage for each network is given byMatthaei et al pp51-52 :\frac{V_1}{V_2}=\sqrt{\frac{Z_{I1}}{Z_{I2}}}e^{\gamma_1} :\frac{V_2}{V_3}=\sqrt{\frac{Z_{I2}}{Z_{I3}}}e^{\gamma_2} :\frac{V_3}{V_4}=\sqrt{\frac{Z_{I3}}{Z_{I4}}}e^{\gamma_3} The terms \sqrt{\frac{Z_{In}}{Z_{Im}}} are impedance scaling termsMatthaei et al pp37-38 and their use is explained in the image impedance article. The imaginary phase constant, , can be added directly to the attenuation constant, , to form a single complex number that can be handled in one mathematical operation provided they are to the same base. This property leads to its importance in Fourier analysis and makes it acoustically unique. == General form == In general, the function may also have: * a spatial variable x that represents the position on the dimension on which the wave propagates, and a characteristic parameter k called wave number (or angular wave number), which represents the proportionality between the angular frequency ω and the linear speed (speed of propagation) ν; * a non-zero center amplitude, D which is *y(x, t) = A\sin(kx - \omega t + \varphi) + D, if the wave is moving to the right *y(x, t) = A\sin(kx + \omega t + \varphi) + D, if the wave is moving to the left. The formula of a sinusoidal plane wave can be written in several other ways: *: F(\vec x,t)=A \cos (2\pi[(\vec x \cdot \hat n)/\lambda - t/T] + \varphi) :Here \lambda = 1/ u is the wavelength, the distance between two wavefronts where the field is equal to the amplitude A; and T = \lambda/c is the period of the field's variation over time, seen at any fixed point in space. A sine wave, sinusoidal wave, or just sinusoid is a mathematical curve defined in terms of the sine trigonometric function, of which it is the graph. 
The phase velocity equals :v_p=\frac{\omega}{\beta}=\frac{c}{\sqrt{1-\frac{\omega_\mathrm{c}^2}{\omega^2}}}>c ==Filters and two-port networks== The term propagation constant or propagation function is applied to filters and other two-port networks used for signal processing. ",The propagation constant is a measure of the amplitude of the sinusoidal wave that varies with distance.,The propagation constant is a real number that remains constant with distance due to the phase change in the sinusoidal wave.,The propagation constant is a real number that varies with distance due to the phase change in the sinusoidal wave.,The propagation constant is a complex number that varies with distance due to the phase change in the sinusoidal wave.,The propagation constant is a complex number that remains constant with distance due to the phase change in the sinusoidal wave.,D,kaggle200,"Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant:
The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant.
The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc.
The propagation constant's value is expressed logarithmically, almost universally to the base ""e"", rather than the more usual base 10 that is used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change.","Propagation constant The propagation constant of the sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the more usual base 10 that is used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change.","Propagation constant The propagation constant of the sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. - Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant:
The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant.
The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc.
The propagation constant's value is expressed logarithmically, almost universally to the base ""e"", rather than the more usual base 10 that is used in telecommunications in other situations. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the more usual base 10 that is used in telecommunications in other situations. The quantity measured, such as voltag
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. - Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant:
The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant.
The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc.
The propagation constant's value is expressed logarithmically, almost universally to the base ""e"", rather than the more usual base 10 that is used in telecommunications in other situations. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than the more usual base 10 that is used in telecommunications in other situations. The quantity measured, such as voltag[SEP]What is the propagation constant in sinusoidal waves?","['D', 'E', 'C']",1.0
What is the gravitomagnetic interaction?,"Gravitomagnetism is a widely used term referring specifically to the kinetic effects of gravity, in analogy to the magnetic effects of moving electric charge. This can be expressed as an attractive or repulsive gravitomagnetic component. Gravitoelectromagnetism, abbreviated GEM, refers to a set of formal analogies between the equations for electromagnetism and relativistic gravitation; specifically: between Maxwell's field equations and an approximation, valid under certain conditions, to the Einstein field equations for general relativity. The main consequence of the gravitomagnetic field, or velocity-dependent acceleration, is that a moving object near a massive, non-axisymmetric, rotating object will experience acceleration not predicted by a purely Newtonian (gravitoelectric) gravity field. A group at Stanford University is currently analyzing data from the first direct test of GEM, the Gravity Probe B satellite experiment, to see whether they are consistent with gravitomagnetism.Gravitomagnetism in Quantum Mechanics, 2014 https://www.slac.stanford.edu/pubs/slacpubs/14750/slac-pub-14775.pdf The Apache Point Observatory Lunar Laser-ranging Operation also plans to observe gravitomagnetism effects. ==Equations== According to general relativity, the gravitational field produced by a rotating object (or any rotating mass–energy) can, in a particular limiting case, be described by equations that have the same form as in classical electromagnetism. All of those observed properties could be explained in terms of gravitomagnetic effects. In theories of quantum gravity, the graviton is the hypothetical quantum of gravity, an elementary particle that mediates the force of gravitational interaction. In physics, gravity () is a fundamental interaction which causes mutual attraction between all things with mass or energy. (See Relativistic wave equations for more on ""spin-1"" and ""spin-2"" fields). ==Higher-order effects== Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. In nuclear physics and particle physics, the weak interaction, which is also often called the weak force or weak nuclear force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. Modelling this complex behaviour as a curved spacetime problem has yet to be done and is believed to be very difficult. ==Gravitomagnetic fields of astronomical objects== The formula for the gravitomagnetic field Bg near a rotating body can be derived from the GEM equations. When such fast motion and such strong gravitational fields exist in a system, the simplified approach of separating gravitomagnetic and gravitoelectric forces can be applied only as a very rough approximation. == Lack of invariance == While Maxwell's equations are invariant under Lorentz transformations, the GEM equations are not. Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way that gravitational interactions do. This represents a ""special case"" in which gravitomagnetic effects generate a chiral corkscrew-like gravitational field around the object. 
This apparent field may be described by two components that act respectively like the electric and magnetic fields of electromagnetism, and by analogy these are called the gravitoelectric and gravitomagnetic fields, since these arise in the same way around a mass that a moving electric charge is the source of electric and magnetic fields. The electromagnetic force arises from an exchange of virtual photons, where the QFT description of gravity is that there is an exchange of virtual gravitons. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light. The weak interaction does not produce bound states, nor does it involve binding energy something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. * Gravitomagnetic London Moment – New test of General Relativity? ",The gravitomagnetic interaction is a force that is produced by the rotation of atoms in materials with linear properties that enhance time-varying gravitational fields.,"The gravitomagnetic interaction is a force that acts against gravity, produced by materials that have nonlinear properties that enhance time-varying gravitational fields.","The gravitomagnetic interaction is a new force of nature generated by rotating matter, whose intensity is proportional to the rate of spin, according to the general theory of relativity.","The gravitomagnetic interaction is a force that occurs in neutron stars, producing a gravitational analogue of the Meissner effect.",The gravitomagnetic interaction is a force that is produced by the rotation of atoms in materials of different gravitational permeability.,C,kaggle200,"Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. For instance, if two wheels are spun on a common axis, the mutual gravitational attraction between the two wheels will be greater if they spin in opposite directions than in the same direction. This can be expressed as an attractive or repulsive gravitomagnetic component.
According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinning body induces an additional component of the gravitational field that acts on a freely-falling test particle with a non-central, gravitomagnetic Lorentz-like force.
Particles orbiting in opposite directions experience gravitomagnetic corrections ""T"" with opposite signs, so that the difference of their orbital periods would cancel the standard Keplerian terms and would add the gravitomagnetic ones.
The use of zero-point energy for space travel is speculative and does not form part of the mainstream scientific consensus. A complete quantum theory of gravitation (that would deal with the role of quantum phenomena like zero-point energy) does not yet exist. Speculative papers explaining a relationship between zero-point energy and gravitational shielding effects have been proposed, but the interaction (if any) is not yet fully understood. Most serious scientific research in this area depends on the theorized anti-gravitational properties of antimatter (currently being tested at the alpha experiment at CERN) and/or the effects of non-Newtonian forces such as the gravitomagnetic field under specific quantum conditions. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. In certain conditions the gravitomagnetic field can be repulsive. In neutrons stars for example it can produce a gravitational analogue of the Meissner effect, but the force produced in such an example is theorized to be exceedingly weak.","According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinning body induces an additional component of the gravitational field that acts on a freely-falling test particle with a non-central, gravitomagnetic Lorentz-like force.
It is popular in some circles to use the gravitomagnetic approach to the linearized field equations. The reason for this popularity should be immediately evident below, by contrasting it to the difficulties of working with the equations above. The linearized metric h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu} can be read off from the Lense–Thirring metric given above, where ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu, and \eta_{\mu\nu}\,dx^\mu dx^\nu = c^2 dt^2 - dx^2 - dy^2 - dz^2. In this approach, one writes the linearized metric, given in terms of the gravitomagnetic potentials \phi and \vec{A}, as h_{00} = -\frac{2\phi}{c^2} and h_{0i} = \frac{2A_i}{c^2}, where \phi = -\frac{GM}{r} is the gravito-electric potential, and \vec{A} = \frac{G}{r^3 c}\,\vec{S}\times\vec{r} is the gravitomagnetic potential. Here \vec{r} is the 3D spatial coordinate of the observer, and \vec{S} is the angular momentum of the rotating body, exactly as defined above. The corresponding fields are \vec{E} = -\nabla\phi - \frac{1}{2c}\frac{\partial\vec{A}}{\partial t} for the gravito-electric field, and \vec{B} = \frac{1}{2}\nabla\times\vec{A} is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain \vec{B} = -\frac{G}{2cr^3}\left[\vec{S} - \frac{3(\vec{S}\cdot\vec{r})\,\vec{r}}{r^2}\right] as the gravitomagnetic field. Note that it is half the Lense–Thirring precession frequency. In this context, Lense–Thirring precession can essentially be viewed as a form of Larmor precession. The factor of 1/2 suggests that the correct gravitomagnetic analog of the gyromagnetic ratio is (curiously!) two.
In physics, the gravitomagnetic clock effect is a deviation from Kepler's third law that, according to the weak-field and slow-motion approximation of general relativity, will be suffered by a particle in orbit around a (slowly) spinning body, such as a typical planet or star.","- Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. In certain conditions the gravitomagnetic field can be repulsive. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinning body induces an additional component of the gravitational field that acts on a freely-falling test particle with a non-central, gravitomagnetic Lorentz-like force.
It is popular in some circles to use the gravitomagnetic approach to the linearized field equations. In this approach, one writes the linearized metric, given in terms of the gravitomagnetic potentials \phi and \vec{A}, as h_{00} = -\frac{2\phi}{c^2} and h_{0i} = \frac{2A_i}{c^2}, where \phi = -\frac{GM}{r} is the gravito-electric potential, and \vec{A} = \frac{G}{r^3 c}\,\vec{S}\times\vec{r} is the gravitomagnetic potential. Most serious scientific research in this area depends on the theorized anti-gravitational properties of antimatter (currently being tested at the alpha experiment at CERN) and/or the effects of non-Newtonian forces such as the gravitomagnetic field under specific quantum conditions. The corresponding fields are \vec{E} = -\nabla\phi - \frac{1}{2c}\frac{\partial\vec{A}}{\partial t} for the gravito-electric field, and \vec{B} = \frac{1}{2}\nabla\times\vec{A} is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain \vec{B} = -\frac{G}{2cr^3}\left[\vec{S} - \frac{3(\vec{S}\cdot\vec{r})\,\vec{r}}{r^2}\right] as the gravitomagnetic field. The factor of 1/2 suggests that the correct gravitomagnetic analog of the gyromagnetic ratio is (curiously!) two.
In physics, the gravitomagnetic clock effect is a deviation from Kepler's third law that, according to the weak-field and slow-motion approximation of general relativity, will be suffered by a particle in orbit around a (slowly) spinning body, such as a typical planet or star. This can be expressed as an attractive or repulsive gravitomagnetic component.
According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinnin","- Some higher-order gravitomagnetic effects can reproduce effects reminiscent of the interactions of more conventional polarized charges. In certain conditions the gravitomagnetic field can be repulsive. According to the general theory of relativity, rotating matter can generate a new force of nature, known as the gravitomagnetic interaction, whose intensity is proportional to the rate of spin. According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinning body induces an additional component of the gravitational field that acts on a freely-falling test particle with a non-central, gravitomagnetic Lorentz-like force.
It is popular in some circles to use the gravitomagnetic approach to the linearized field equations. In this approach, one writes the linearized metric, given in terms of the gravitomagnetic potentials \phi and \vec{A}, as h_{00} = -\frac{2\phi}{c^2} and h_{0i} = \frac{2A_i}{c^2}, where \phi = -\frac{GM}{r} is the gravito-electric potential, and \vec{A} = \frac{G}{r^3 c}\,\vec{S}\times\vec{r} is the gravitomagnetic potential. Most serious scientific research in this area depends on the theorized anti-gravitational properties of antimatter (currently being tested at the alpha experiment at CERN) and/or the effects of non-Newtonian forces such as the gravitomagnetic field under specific quantum conditions. The corresponding fields are \vec{E} = -\nabla\phi - \frac{1}{2c}\frac{\partial\vec{A}}{\partial t} for the gravito-electric field, and \vec{B} = \frac{1}{2}\nabla\times\vec{A} is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain \vec{B} = -\frac{G}{2cr^3}\left[\vec{S} - \frac{3(\vec{S}\cdot\vec{r})\,\vec{r}}{r^2}\right] as the gravitomagnetic field. The factor of 1/2 suggests that the correct gravitomagnetic analog of the gyromagnetic ratio is (curiously!) two.
In physics, the gravitomagnetic clock effect is a deviation from Kepler's third law that, according to the weak-field and slow-motion approximation of general relativity, will be suffered by a particle in orbit around a (slowly) spinning body, such as a typical planet or star. This can be expressed as an attractive or repulsive gravitomagnetic component.
According to general relativity, in its weak-field and slow-motion linearized approximation, a slowly spinnin[SEP]What is the gravitomagnetic interaction?","['C', 'B', 'E']",1.0
What did Newton's manuscripts of the 1660s show?,"Newton was well-versed in both classics and modern languages. Richard Newton (19 May 1777 – 8 December 1798) was an English caricaturist, miniaturist and book illustrator. == Life and works == Born in London, Newton published his first caricature at thirteen. Sir Henry Newton (1651-1715) was a British judge and diplomat. Peter Anthony Newton (1935–1987) was a British academic and collector specialising in medieval stained glass. == Education == Newton studied history of art at the Courtauld Institute. When in 1734 Newton wrote an open letter to the Vice-Chancellor William Holmes complaining of obstruction by Exeter College, Conybeare responded with Calumny Refuted: Or, an Answer to the Personal Slanders Published by Dr. Richard Newton (1735); Newton responded with The Grounds of the Complaint of the Principal of Hart Hall (1735). The year 1660 in science and technology involved some significant events. ==Events== * November 28 – At Gresham College in London, twelve men, including Christopher Wren, Robert Boyle, John Wilkins, and Robert Moray, meet after a lecture by Wren and resolve to found ""a College for the Promoting of Physico- Mathematicall Experimentall Learning"", which will become the Royal Society. ==Botany== * John Ray publishes Catalogus plantarum circa Cantabrigiam nascentium in Cambridge, the first flora of an English county. ==Mathematics== * The popular English-language edition by Isaac Barrow of Euclid's Elements is published in London. ==Physics== * Robert Boyle publishes New Experiments Physico-Mechanicall, Touching the Spring of the Air and its Effects (the second edition in 1662 will contain Boyle's Law). ==Births== * February 19 – Friedrich Hoffmann, German physician and chemist (died 1742) * April 16 – Hans Sloane, Ulster Scots-born collector and physician (died 1753) * March 15 – Olof Rudbeck the Younger, Swedish naturalist (died 1740) * May 27 (bapt.) Newton became a canon of Christ Church, Oxford in January 1753. In 1794, Holland published an edition of Laurence Sterne's A Sentimental Journey Through France and Italy with twelve plates by Newton. During his time at York, Newton worked to establish the Wormald Library as a memorial to his former tutor, Francis Wormald. Newton built, at a cost of nearly £1,500, one- fourth part of a large quadrangle, consisting of a chapel, consecrated by John Potter, then Bishop of Oxford, on 25 November 1716, and an angle, containing fifteen single rooms; purchased the adjoining property at a cost of £160 more, and endowed the new institution with an annuity of £53 6s. 8d. paid from his estate at Lavendon. Newton died of typhus in London at the age of 21. ==Books illustrated by Richard Newton== * Henry Fielding Tom Jones (1799) * Laurence Sterne A Sentimental Journey through France and Italy (1794) == Notes == ==References== * * * * (Vol. VI, Vol. VII, 1942; Vol. VIII, 1947) == External links == * British Museum Bio for Richard Newton * https://www.lambiek.net/artists/n/newton_richard.htm Category:English illustrators Category:English cartoonists Category:English caricaturists Category:English satirists Category:Artists from London Category:1777 births Category:1798 deaths Category:Deaths from typhus – Francis Hauksbee, English scientific instrument maker and experimentalist (died 1713) * approx. 
date – Edward Lhuyd, Welsh naturalist (died 1709) * Date unknown – Jeanne Dumée, French astronomer (born 1660) ==Deaths== * May 29 – Frans van Schooten, Dutch Cartesian mathematician (born 1615) * June 30 – William Oughtred, English mathematician who invented the slide rule (born 1574) * Jean-Jacques Chifflet, French physician and antiquary (born 1588) * Walter Rumsey, Welsh judge and amateur scientist (born 1584) ==References== Category:17th century in science Category:1660s in science For these long-continued exertions Newton incurred the charge of being 'founder-mad.' In his will, published after his death in 1987, he left his collection to the library of the university on the condition that this material was kept together in the King's Manor Library. == Selected works == *Peter A. Newton and Jill Kerr. He was knighted in 1715, but died later the same year.Noble, Mark ""A Biographical History of England, From the Revolution to the End of George I's Reign"" pp. 175-176 Henry Newton had two daughters. As principal of the hall, Newton worked towards two aims. After many years Newton triumphed over all obstacles. Newton produced nearly 300 single sheet prints of which the British Museum's collection includes more than half. M. Dorothy George's ""Catalogue of Political and Personal Satires Preserved in the Department of Prints and Drawings in the British Museum"" lists 98 prints by Newton. He was awarded his doctorate in 1961 for his dissertation Schools of glass painting in the Midlands 1275–1430. == Academic career == Newton was appointed Mellon Lecturer in British Medieval Art at the University of York in 1965, the first experienced specialist to teach medieval stained glass at the university level. There are frequent sneers in the 'Terræ Filius' of Nicholas Amhurst and the pamphlets of the period at his economical system of living. ",Newton learned about tangential motion and radially directed force or endeavour from Hooke's work.,Newton's manuscripts did not show any evidence of combining tangential motion with the effects of radially directed force or endeavour.,Newton combined tangential motion with the effects of radially directed force or endeavour and expressed the concept of linear inertia.,Newton's manuscripts showed that he learned about the inverse square law from Hooke's private papers.,"Newton's manuscripts showed that he was indebted to Descartes' work, published in 1644, for the concept of linear inertia.",C,kaggle200,"Nicole Oresme's manuscripts from the 14th century show what may be one of the earliest uses of as a sign for plus.
In the 1660s Newton studied the motion of colliding bodies, and deduced that the centre of mass of two colliding bodies remains in uniform motion. Surviving manuscripts of the 1660s also show Newton's interest in planetary motion and that by 1669 he had shown, for a circular case of planetary motion, that the force he called ""endeavour to recede"" (now called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The difference between the centrifugal and centripetal points of view, though a significant change of perspective, did not change the analysis. Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
According to I Bernard Cohen, in his Guide to Newton’s Principia, ""The key to Newton’s reasoning was found in the 1880s, when the earl of Portsmouth gave his family’s vast collection of Newton’s scientific and mathematical papers to Cambridge University. Among Newton’s manuscripts they found the draft text of a letter, ... in which Newton elaborated his mathematical argument. [This]
Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. As described above, Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of radially directed force or endeavour, for example in his derivation of the inverse square relation for the circular case. They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was). These matters do not appear to have been learned by Newton from Hooke.","In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had, by 1669, arrived at proofs that in a circular case of planetary motion, ""endeavour to recede"" (what was later called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.
Newton's early work on motion In the 1660s Newton studied the motion of colliding bodies and deduced that the centre of mass of two colliding bodies remains in uniform motion. Surviving manuscripts of the 1660s also show Newton's interest in planetary motion and that by 1669 he had shown, for a circular case of planetary motion, that the force he called ""endeavour to recede"" (now called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The difference between the centrifugal and centripetal points of view, though a significant change of perspective, did not change the analysis. Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
Newton's acknowledgment On the other hand, Newton did accept and acknowledge, in all editions of the Principia, that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke, and Halley in this connection in the Scholium to Proposition 4 in Book 1. Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."" Modern priority controversy Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. As described above, Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of radially directed force or endeavour, for example in his derivation of the inverse square relation for the circular case. They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was). These matters do not appear to have been learned by Newton from Hooke.","Among Newton's manuscripts they found the draft text of a letter, ... According to Newton scholar J. Bruce Brackenridge ... They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was). Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
According to I Bernard Cohen, in his Guide to Newton's Principia, ""The key to Newton's reasoning was found in the 1880s, when the earl of Portsmouth gave his family's vast collection of Newton's scientific and mathematical papers to Cambridge University. ... in which Newton elaborated his mathematical argument. [This]
Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."" Modern priority controversy Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. Newton acknowledged Wren, Hooke, and Halley in this connection in the Scholium to Proposition 4 in Book 1. These matters do not appear to have been learned by Newton from Hooke. After his 1679–1680 correspond","Among Newton's manuscripts they found the draft text of a letter, ... According to Newton scholar J. Bruce Brackenridge ... They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was). Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
According to I Bernard Cohen, in his Guide to Newton's Principia, ""The key to Newton's reasoning was found in the 1880s, when the earl of Portsmouth gave his family's vast collection of Newton's scientific and mathematical papers to Cambridge University. ... in which Newton elaborated his mathematical argument. [This]
Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."" Modern priority controversy Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. Newton acknowledged Wren, Hooke, and Halley in this connection in the Scholium to Proposition 4 in Book 1. These matters do not appear to have been learned by Newton from Hooke. After his 1679–1680 correspond[SEP]What did Newton's manuscripts of the 1660s show?","['C', 'E', 'D']",1.0
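The row above credits Newton's 1660s manuscripts with a derivation of the inverse square relation for the circular case. As a clarifying aside, here is a minimal sketch of the standard circular-orbit argument in modern notation; the notation is ours for illustration, not a transcription of Newton's manuscripts.

```latex
% "Endeavour to recede" (centrifugal acceleration) for uniform circular motion:
a = \frac{v^{2}}{r}, \qquad v = \frac{2\pi r}{T}
\quad\Longrightarrow\quad
a = \frac{4\pi^{2} r}{T^{2}} .
% Combining this with Kepler's third law, T^{2} \propto r^{3}, gives
a \;\propto\; \frac{r}{r^{3}} \;=\; \frac{1}{r^{2}} ,
% i.e. an inverse-square dependence on distance from the centre.
```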
What is the decay energy for the free neutron decay process?,"The following diagram gives a summary sketch of the beta decay process according to the present level of understanding. () () \+ \+ : For diagrams at several levels of detail, see § Decay process, below. : ==Energy budget== For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is . Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts): : Q \text{ [MeV]} = -931.5 \Delta M \text{ [Da]},~~(\text{where }\Delta M = \Sigma M_\text{products} - \Sigma M_\text{reactants}). The decay energy is the energy change of a nucleus having undergone a radioactive decay. In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body""). In this example, the total decay energy is 1.16 MeV, so the antineutrino has the remaining energy: . The following table lists the Δ and Δ values for the first few values of : Forbiddenness Δ Δ Superallowed 0 Allowed 0, 1 First forbidden 0, 1, 2 Second forbidden 1, 2, 3 Third forbidden 2, 3, 4 ==Rare decay modes== ===Bound-state β− decay=== A very small minority of free neutron decays (about four per million) are so-called ""two-body decays"", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom.An Overview Of Neutron Decay J. Byrne in Quark-Mixing, CKM Unitarity (H. Abele and D. Mund, 2002), see p.XV In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino. In the figure to the right, an example of an electron with 0.40 MeV energy from the beta decay of 210Bi is shown. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. The beta decay of the neutron described in this article can be notated at four slightly different levels of detail, as shown in four layers of Feynman diagrams in a section below. This results in 13 MeV (6.5% of the total fission energy) being deposited in the reactor core from delayed beta decay of fission products, at some time after any given fission reaction has occurred. The following is a detailed classification: === Thermal === A thermal neutron is a free neutron with a kinetic energy of about 0.025 eV (about 4.0×10−21 J or 2.4 MJ/kg, hence a speed of 2.19 km/s), which is the energy corresponding to the most probable speed at a temperature of 290 K (17 °C or 62 °F), the mode of the Maxwell–Boltzmann distribution for this temperature, Epeak = 1/2 k T. A small fraction (about 1 in 1,000) of free neutrons decay with the same products, but add an extra particle in the form of an emitted gamma ray: : This gamma ray may be thought of as a sort of ""internal bremsstrahlung"" that arises as the emitted beta particle (electron) interacts with the charge of the proton in an electromagnetic way. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. The neutron detection temperature, also called the neutron energy, indicates a free neutron's kinetic energy, usually given in electron volts. 
A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino and into the kinetic energy of these particles. In reactors, heavy water, light water, or graphite are typically used to moderate neutrons. ===Ultrafast=== :*Relativistic :*Greater than 20 MeV ===Other classifications=== ;Pile :*Neutrons of all energies present in nuclear reactors :*0.001 eV to 15 MeV. The generic equation is: : → + + This may be considered as the decay of a proton inside the nucleus to a neutron: :p → n + + However, decay cannot occur in an isolated proton because it requires energy, due to the mass of the neutron being greater than the mass of the proton. decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. Qualitatively, the higher the temperature, the higher the kinetic energy of the free neutrons. However the range of neutrons from fission follows a Maxwell–Boltzmann distribution from 0 to about 14 MeV in the center of momentum frame of the disintegration, and the mode of the energy is only 0.75 MeV, meaning that fewer than half of fission neutrons qualify as ""fast"" even by the 1 MeV criterion.Byrne, J. Neutrons, Nuclei, and Matter, Dover Publications, Mineola, New York, 2011, (pbk.) ",0.013343 MeV,0.013 MeV,"1,000 MeV",0.782 MeV,0.782343 MeV,E,kaggle200,"As explained by Wolchover (2018), the beam test would be incorrect if there is a decay mode that does not produce a proton.
Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts):
A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body"").
undergoes β decay to zirconium-90 with a half-life of 64.1 hours and a decay energy of 2.28 MeV with an average beta energy of 0.9336 MeV. It also produces 0.01% 1.7 MeV photons during its decay process to the 0 state of Zr, followed by pair production. The interaction between emitted electrons and matter can lead to the emission of Bremsstrahlung radiation.","A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body"").
For the free neutron the decay energy for this process (based on the masses of the neutron, proton, and electron) is 0.782343 MeV. The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0.782±0.013 MeV. The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy) as well as neutrino mass is constrained by many other methods.
For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is 0.782343 MeV. That is the difference between the rest mass of the neutron and the sum of the rest masses of the products. That difference has to be carried away as kinetic energy. The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0.782±0.013 MeV. The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy); furthermore, neutrino mass is constrained by many other methods.","In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body"").
For the free neutron the decay energy for this process (based on the masses of the neutron, proton, and electron) is 0.782343 MeV. In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body"").
undergoes β decay to zirconium-90 with a half-life of 64.1 hours and a decay energy of 2.28 MeV with an average beta energy of 0.9336 MeV. A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy) as well as neutrino mass is constrained by many other methods.
For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is 0.782343 MeV. - As explained by Wolchover (2018), the beam test would be incorrect if there is a decay mode that does not produce a proton.
Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts):
A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0
For the free neutron the decay energy for this process (based on the masses of the neutron, proton, and electron) is 0.782343 MeV. In this type of free neutron decay, in essence all of the neutron decay energy is carried off by the antineutrino (the other ""body"").
undergoes β decay to zirconium-90 with a half-life of 64.1 hours and a decay energy of 2.28 MeV with an average beta energy of 0.9336 MeV. A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). The latter number is not well-enough measured to determine the comparatively tiny rest mass of the neutrino (which must in theory be subtracted from the maximal electron kinetic energy) as well as neutrino mass is constrained by many other methods.
For the free neutron, the decay energy for this process (based on the rest masses of the neutron, proton and electron) is 0.782343 MeV. - As explained by Wolchover (2018), the beam test would be incorrect if there is a decay mode that does not produce a proton.
Decay energy is usually quoted in terms of the energy units MeV (million electronvolts) or keV (thousand electronvolts):
A very small minority of neutron decays (about four per million) are so-called ""two-body (neutron) decays"", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the 13.6 eV necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, as a neutral hydrogen atom (one of the ""two bodies""). The maximal energy of the beta decay electron (in the process wherein the neutrino receives a vanishingly small amount of kinetic energy) has been measured at 0[SEP]What is the decay energy for the free neutron decay process?","['D', 'E', 'C']",0.5
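As a rough cross-check of the 0.782343 MeV decay energy quoted in the row above, the following sketch recomputes Q from the particle rest energies. The mass values are approximate CODATA-style rest energies supplied here for illustration, not taken from the row, so the last decimal places may differ slightly from the quoted figure.

```python
# Decay energy of free-neutron beta decay, n -> p + e- + antineutrino_e,
# computed as Q = m_n c^2 - m_p c^2 - m_e c^2 (antineutrino rest mass neglected).
M_N_C2 = 939.565420  # neutron rest energy, MeV (approximate value)
M_P_C2 = 938.272089  # proton rest energy, MeV (approximate value)
M_E_C2 = 0.510999    # electron rest energy, MeV (approximate value)

q = M_N_C2 - M_P_C2 - M_E_C2
print(f"Q = {q:.6f} MeV")  # about 0.7823 MeV, consistent with the value quoted above
```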
What is Hesse's principle of transfer in geometry?,"In geometry, Hesse's principle of transfer () states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). ""Hesses's principle of transfer and the representation of lie algebras"", Archive for History of Exact Sciences, 39(1), pp. 41–73. ==References== ===Original reference=== *Hesse, L. O. (1866). In mathematics, projective geometry is the study of geometric properties that are invariant with respect to projective transformations. It was introduced by Colin Maclaurin and studied by ,. and is also known as Young's geometry, named after the later work of John Wesley Young on finite geometry. ==Description== The Hesse configuration has the same incidence relations as the lines and points of the affine plane over the field of 3 elements. Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations). Because a Euclidean geometry is contained within a projective geometry—with projective geometry having a simpler foundation—general results in Euclidean geometry may be derived in a more transparent manner, where separate but similar theorems of Euclidean geometry may be handled collectively within the framework of projective geometry. It was realised that the theorems that do apply to projective geometry are simpler statements. In geometry, Hesse's theorem, named for Otto Hesse, states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. Projective geometry, like affine and Euclidean geometry, can also be developed from the Erlangen program of Felix Klein; projective geometry is characterized by invariants under transformations of the projective group. It is a general theorem (a consequence of axiom (3)) that all coplanar lines intersect—the very principle Projective Geometry was originally intended to embody. The Hesse configuration shares with the Möbius–Kantor configuration the property of having a complex realization but not being realizable by points and straight lines in the Euclidean plane. Projective geometries are characterised by the ""elliptic parallel"" axiom, that any two planes always meet in just one line, or in the plane, any two lines always meet in just one point. The basic intuitions are that projective space has more points than Euclidean space, for a given dimension, and that geometric transformations are permitted that transform the extra points (called ""points at infinity"") to Euclidean points, and vice-versa. A quadrilateral with this property is called a Hesse quadrilateral. 
==References== * Category:Projective geometry The parallel properties of elliptic, Euclidean and hyperbolic geometries contrast as follows: : Given a line and a point not on the line, ::; Elliptic : there exists no line through that does not meet ::; Euclidean : there exists exactly one line through that does not meet ::; Hyperbolic : there exists more than one line through that does not meet The parallel property of elliptic geometry is the key idea that leads to the principle of projective duality, possibly the most important property that all projective geometries have in common. ==Duality== In 1825, Joseph Gergonne noted the principle of duality characterizing projective plane geometry: given any theorem or definition of that geometry, substituting point for line, lie on for pass through, collinear for concurrent, intersection for join, or vice versa, results in another theorem or valid definition, the ""dual"" of the first. The Hesse configuration may in turn be augmented by adding four points, one for each triple of non-intersecting lines, and one line containing the four new points, to form a configuration of type 134134, the set of points and lines of the projective plane over the three-element field. ==Realizability== The Hesse configuration can be realized in the complex projective plane as the 9 inflection points of an elliptic curve and the 12 lines through triples of inflection points. The Hessian polyhedron is a representation of the Hesse configuration in the complex plane. : (If the conic degenerates into two straight lines, Pascal's becomes Pappus's theorem, which has no interesting dual, since the Brianchon point trivially becomes the two lines' intersection point.) ==Axioms of projective geometry== Any given geometry may be deduced from an appropriate set of axioms. In geometry, the Hesse configuration is a configuration of 9 points and 12 lines with three points per line and four lines through each point. *Projective Geometry. — free tutorial by Tom Davis. ",Hesse's principle of transfer is a concept in biology that explains the transfer of genetic information from one generation to another.,Hesse's principle of transfer is a concept in chemistry that explains the transfer of electrons between atoms in a chemical reaction.,Hesse's principle of transfer is a concept in physics that explains the transfer of energy from one object to another.,Hesse's principle of transfer is a concept in economics that explains the transfer of wealth from one individual to another.,"Hesse's principle of transfer is a concept in geometry that states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1.",E,kaggle200,"The bag valve mask concept was developed in 1956 by the German engineer Holger Hesse and his partner, Danish anaesthetist Henning Ruben, following their initial work on a suction pump. Hesse's company was later renamed Ambu A/S, which has manufactured and marketed the device since 1956. An Ambu bag is a self-inflating bag resuscitator from Ambu A/S, which still manufactures and markets self-inflating bag resuscitators.
In the second edition Keisler introduces the extension principle and the transfer principle in the following form:
In geometry, Hesse's theorem, named for Otto Hesse, states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. A quadrilateral with this property is called a Hesse quadrilateral.
In geometry, Hesse's principle of transfer () states that if the points of the projective line P are depicted by a rational normal curve in P, then the group of the projective transformations of P that preserve the curve is isomorphic to the group of the projective transformations of P (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). It was originally introduced by Otto Hesse in 1866, in a more restricted form. It influenced Felix Klein in the development of the Erlangen program. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan.","A freshman-level accessible formulation of the transfer principle is Keisler's book Elementary Calculus: An Infinitesimal Approach.
Example Every real x satisfies the inequality where ⌊⋅⌋ is the integer part function. By a typical application of the transfer principle, every hyperreal x satisfies the inequality where ∗⌊⋅⌋ is the natural extension of the integer part function. If x is infinite, then the hyperinteger ∗⌊x⌋ is infinite, as well.
In mathematics, the syzygetic pencil or Hesse pencil, named for Otto Hesse, is a pencil (one-dimensional family) of cubic plane elliptic curves in the complex projective plane, defined by the equation 0.
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). It was originally introduced by Otto Hesse in 1866, in a more restricted form. It influenced Felix Klein in the development of the Erlangen program. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan.","An Ambu bag is a self-inflating bag resuscitator from Ambu A/S, which still manufactures and markets self-inflating bag resuscitators.
In the second edition Keisler introduces the extension principle and the transfer principle in the following form:
In geometry, Hesse's theorem, named for Otto Hesse, states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. If x is infinite, then the hyperinteger ∗⌊x⌋ is infinite, as well.
In mathematics, the syzygetic pencil or Hesse pencil, named for Otto Hesse, is a pencil (one-dimensional family) of cubic plane elliptic curves in the complex projective plane, defined by the equation 0.
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). A freshman-level accessible formulation of the transfer principle is Keisler's book Elementary Calculus: An Infinitesimal Approach.
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). A quadrilateral with this property is called a Hesse quadrilateral.
Example Every real x satisfies the inequality where ⌊⋅⌋ is the integer part function. It was originally introduced by Otto Hesse in 1866, in a more restricted form. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan. It
In the second edition Keisler introduces the extension principle and the transfer principle in the following form:
In geometry, Hesse's theorem, named for Otto Hesse, states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. If x is infinite, then the hyperinteger ∗⌊x⌋ is infinite, as well.
In mathematics, the syzygetic pencil or Hesse pencil, named for Otto Hesse, is a pencil (one-dimensional family) of cubic plane elliptic curves in the complex projective plane, defined by the equation 0.
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). A freshman-level accessible formulation of the transfer principle is Keisler's book Elementary Calculus: An Infinitesimal Approach.
In geometry, Hesse's principle of transfer (German: Übertragungsprinzip) states that if the points of the projective line P1 are depicted by a rational normal curve in Pn, then the group of the projective transformations of Pn that preserve the curve is isomorphic to the group of the projective transformations of P1 (this is a generalization of the original Hesse's principle, in a form suggested by Wilhelm Franz Meyer). A quadrilateral with this property is called a Hesse quadrilateral.
Example Every real x satisfies the inequality where ⌊⋅⌋ is the integer part function. It was originally introduced by Otto Hesse in 1866, in a more restricted form. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan. Since its original conception, it was generalized by many mathematicians, including Klein, Fano, and Cartan. It[SEP]What is Hesse's principle of transfer in geometry?","['E', 'D', 'C']",1.0
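The statement of Hesse's principle of transfer repeated in the row above turns on the rational normal curve. A short sketch of the underlying map, in notation chosen here purely for illustration, is:

```latex
% Rational normal curve \nu_n : P^1 \to P^n (degree-n Veronese-type embedding)
\nu_n([s:t]) \;=\; [\,s^{n} : s^{n-1}t : \dots : s\,t^{n-1} : t^{n}\,] .
% A projective transformation of P^1 acts on [s:t]; substituting it into the
% monomials above induces a projective transformation of P^n that maps the
% curve \nu_n(P^1) to itself. This assignment gives the isomorphism between
% the projective group of P^1 and the group of projective transformations of
% P^n preserving the curve that the principle asserts.
```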
What is the relationship between the Cauchy momentum equation and the Navier-Stokes equation?,"By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. The compressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor: * the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. ==Incompressible flow== The incompressible momentum Navier–Stokes equation results from the following assumptions on the Cauchy stress tensor:Batchelor (1967) pp. 142–148. * the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. The Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Anglo-Irish physicist and mathematician George Gabriel Stokes. As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable). The Navier-Stokes equations are a set of partial differential equations that describe the motion of fluids. The Navier–Stokes equations assume that the fluid being studied is a continuum (it is infinitely divisible and not composed of particles such as atoms or molecules), and is not moving at relativistic velocities. The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum. 
==Main equation== In convective (or Lagrangian) form the Cauchy momentum equation is written as: \frac{D \mathbf{u}}{D t} = \frac 1 \rho \nabla \cdot \boldsymbol{\sigma} + \mathbf{f} where * \mathbf{u} is the flow velocity vector field, which depends on time and space, (unit: \mathrm{m/s}) * t is time, (unit: \mathrm{s}) * \frac{D \mathbf{u}}{D t} is the material derivative of \mathbf{u}, equal to \partial_t\mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}, (unit: \mathrm{m/s^2}) * \rho is the density at a given point of the continuum (for which the continuity equation holds), (unit: \mathrm{kg/m^3}) * \boldsymbol{\sigma} is the stress tensor, (unit: \mathrm{Pa=N/m^2 = kg \cdot m^{-1} \cdot s^{-2}}) * \mathbf{f}=\begin{bmatrix}f_x\\ f_y\\ f_z\end{bmatrix} is a vector containing all of the accelerations caused by body forces (sometimes simply gravitational acceleration), (unit: \mathrm{m/s^2}) * \nabla\cdot\boldsymbol{\sigma}= \begin{bmatrix} \dfrac{\partial \sigma_{xx}}{\partial x} + \dfrac{\partial \sigma_{yx}}{\partial y} + \dfrac{\partial \sigma_{zx}}{\partial z} \\ \dfrac{\partial \sigma_{xy}}{\partial x} + \dfrac{\partial \sigma_{yy}}{\partial y} + \dfrac{\partial \sigma_{zy}}{\partial z} \\ \dfrac{\partial \sigma_{xz}}{\partial x} + \dfrac{\partial \sigma_{yz}}{\partial y} + \dfrac{\partial \sigma_{zz}}{\partial z} \end{bmatrix} is the divergence of stress tensor. (unit: \mathrm{Pa/m=kg \cdot m^{-2} \cdot s^{-2}}) Commonly used SI units are given in parentheses although the equations are general in nature and other units can be entered into them or units can be removed at all by nondimensionalization. In the case of an incompressible fluid, (the density following the path of a fluid element is constant) and the equation reduces to: \nabla\cdot\mathbf{u} = 0 which is in fact a statement of the conservation of volume. ==Cauchy momentum equation== The generic density of the momentum source seen previously is made specific first by breaking it up into two new terms, one to describe internal stresses and one for external forces, such as gravity. The Navier–Stokes equations mathematically express momentum balance and conservation of mass for Newtonian fluids. For different types of fluid flow this results in specific forms of the Navier–Stokes equations. ===Newtonian fluid=== ====Compressible Newtonian fluid==== The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids, \tau \propto \frac{\partial u}{\partial y} In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes: :* The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient. The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. This equation generally accompanies the Navier–Stokes equation. The cross differentiated Navier–Stokes equation becomes two equations and one meaningful equation. A significant feature of the Cauchy equation and consequently all other continuum equations (including Euler and Navier–Stokes) is the presence of convective acceleration: the effect of acceleration of a flow with respect to space. The above solution is key to deriving Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant. ===Non-Newtonian fluids=== A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. 
By expressing the deviatoric (shear) stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below. ===Convective acceleration=== thumb|An example of convection. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. is a rank two symmetric tensor given by its covariant components. ","The Navier-Stokes equation can be derived from the Cauchy momentum equation by specifying the stress tensor through a constitutive relation, expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity.",The Navier-Stokes equation is a simplified version of the Cauchy momentum equation that only applies to situations with constant density and viscosity.,"The Navier-Stokes equation is a special case of the Cauchy momentum equation, which is a more general equation that applies to all non-relativistic momentum conservation situations.",The Cauchy momentum equation and the Navier-Stokes equation are completely unrelated and cannot be used interchangeably in any situation.,"The Cauchy momentum equation is a special case of the Navier-Stokes equation, which is a more general equation that applies to all non-relativistic momentum conservation situations.",A,kaggle200,"where represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Euler momentum equation is a Cauchy momentum equation with the Pascal law being the stress constitutive relation:
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.","Finally in convective form the equations are:
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.","By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum. Finally in convective form the equations are:
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Euler momentum equation is a Cauchy momentum equation with the Pascal law being the stress constitutive relation:
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.- where represents the control volume","By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum. Finally in convective form the equations are:
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Euler momentum equation is a Cauchy momentum equation with the Pascal law being the stress constitutive relation:
The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.- where represents the control volume[SEP]What is the relationship between the Cauchy momentum equation and the Navier-Stokes equation?","['C', 'A', 'E']",0.5
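The row above states that substituting a viscosity-based stress tensor into the Cauchy momentum equation, with constant density and viscosity, yields the Navier–Stokes equations. A compact sketch of that step for the incompressible case, in standard notation rather than anything quoted from the row, is:

```latex
% Newtonian constitutive relation for the Cauchy stress:
\boldsymbol{\sigma} = -p\,\mathbf{I} + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf{T}}\right).
% With constant \mu and \nabla\cdot\mathbf{u}=0 (constant density),
% \nabla\cdot\boldsymbol{\sigma} = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
% so the Cauchy equation \rho\,D\mathbf{u}/Dt = \nabla\cdot\boldsymbol{\sigma} + \rho\,\mathbf{f} becomes
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{f},
% i.e. the incompressible Navier--Stokes momentum equation; dropping the viscous
% term (inviscid flow) reduces it further to the Euler equation, as noted above.
```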
What is X-ray pulsar-based navigation (XNAV)?,"X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. After the study, the interest in the XNAV technology within the European Space Agency was consolidated leading, in 2012, to two different and more detailed studies performed by GMV AEROSPACE AND DEFENCE (ES) and the National Physical Laboratory (UK). ===Experiments=== ;XPNAV 1: On 9 November 2016, the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. The advantage of pulsar navigation would be more available signals than from satnav constellations, being unjammable, with the broad range of frequencies available, and security of signal sources from destruction by anti-satellite weapons. ==Types of pulsar for XNAV== Among pulsars, millisecond pulsars are good candidate to be space-time references. Experimental demonstrations have been reported in 2018.NASA test proves pulsars can function as a celestial GPS ==Spacecraft navigation== ===Studies=== The Advanced Concepts Team of ESA studied in 2003 the feasibility of x-ray pulsar navigation in collaboration with the Universitat Politecnica de Catalunya in Spain. In particular, extraterrestrial intelligence might encode rich information using millisecond pulsar signals, and the metadata about XNAV is likely to be encoded by reference to millisecond pulsars. If this is successful, XNAV may be used as secondary navigation technology for the planned Orion missions. X-ray pulsars or accretion-powered pulsars are a class of astronomical objects that are X-ray sources displaying strict periodic variations in X-ray intensity. XPNAV-1 will characterize 26 nearby pulsars for their pulse frequency and intensity to create a navigation database that could be used by future operational missions. XPNAV-1 is the first pulsar navigation mission launched into orbit. X-ray motion analysis is a technique used to track the movement of objects using X-rays. In contrast, X-ray pulsars are members of binary star systems and accrete matter from either stellar winds or accretion disks. Finally, it has been suggested that advanced extraterrestrial intelligence might have tweaked or engineered millisecond pulsars for the goals of timing, navigation and communication. ==References== ==External links== *Johns Hopkins APL to Develop Deep Space Navigation Network *US Government Contract Proposal for X-Ray Pulsar Based Navigation and Time Determination Category:Navigational aids Category:Pulsars Category:Celestial navigation thumb|A simple diagram showing the main difference between traditional navigation and RNAV methods Area navigation (RNAV, usually pronounced as ""ar- nav"") is a method of instrument flight rules (IFR) navigation that allows an aircraft to choose any course within a network of navigation beacons, rather than navigate directly to and from the beacons. 300px|right Radio navigation or radionavigation is the application of radio frequencies to determine a position of an object on the Earth, either the vessel or an obstruction. The X-ray periods range from as little as a fraction of a second to as much as several minutes. 
== Characteristics == An X-ray pulsar consists of a magnetized neutron star in orbit with a normal stellar companion and is a type of binary star system. In contrast, the X-ray pulsars exhibit a variety of spin behaviors. As the neutron star rotates, pulses of X-rays are observed as the hotspots move in and out of view if the magnetic axis is tilted with respect to the spin axis. == Gas supply == The gas that supplies the X-ray pulsar can reach the neutron star by a variety of ways that depend on the size and shape of the neutron star's orbital path and the nature of the companion star. Exactly why the X-ray pulsars show such varied spin behavior is still not clearly understood. == Observations== X-ray pulsars are observed using X-ray telescopes that are satellites in low Earth orbit although some observations have been made, mostly in the early years of X-ray astronomy, using detectors carried by balloons or sounding rockets. This type of imaging allows for tracking movements in the two-dimensional plane of the X-ray. ",X-ray pulsar-based navigation (XNAV) is a navigation technique that uses the periodic X-ray signals emitted from pulsars to determine the location of a vehicle in the Earth's atmosphere.,"X-ray pulsar-based navigation (XNAV) is a navigation technique that uses the periodic radio signals emitted from pulsars to determine the location of a vehicle in deep space, such as a spacecraft.","X-ray pulsar-based navigation (XNAV) is a navigation technique that uses the periodic X-ray signals emitted from satellites to determine the location of a vehicle in deep space, such as a spacecraft.","X-ray pulsar-based navigation (XNAV) is a navigation technique that uses the periodic X-ray signals emitted from pulsars to determine the location of a vehicle in deep space, such as a spacecraft.","X-ray pulsar-based navigation (XNAV) is a navigation technique that uses the periodic radio signals emitted from satellites to determine the location of a vehicle in deep space, such as a spacecraft.",D,kaggle200,"An enhancement to the ""NICER"" mission, the Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), will act as a technology demonstrator for X-ray pulsar-based navigation (XNAV) techniques that may one day be used for deep-space navigation.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GNSS, this comparison would allow the vehicle to triangulate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. On 9 November 2016 the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission.
""X-ray pulsar-based navigation and timing (XNAV)"" or simply ""pulsar navigation"" is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations have been reported in 2018.","Pulsar navigation X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GNSS, this comparison would allow the vehicle to triangulate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. On 9 November 2016 the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to calculate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. Experimental demonstrations have been reported in 2018.","Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. An enhancement to the ""NICER"" mission, the Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), will act as a technology demonstrator for X-ray pulsar-based navigation (XNAV) techniques that may one day be used for deep-space navigation.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed","Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. Experimental demonstrations have been reported in 2018.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. An enhancement to the ""NICER"" mission, the Station Explorer for X-ray Timing and Navigation Technology (SEXTANT), will act as a technology demonstrator for X-ray pulsar-based navigation (XNAV) techniques that may one day be used for deep-space navigation.
X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission.
X-ray pulsar-based navigation and timing (XNAV) or simply pulsar navigation is a navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed[SEP]What is X-ray pulsar-based navigation (XNAV)?","['D', 'E', 'C']",1.0
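A minimal sketch of the timing geometry behind the comparison step described above (the symbols \Delta t, \hat{n}, \vec{r} and c are illustrative, not taken from the passages): comparing a pulse's measured time of arrival at the spacecraft with its predicted arrival time at a reference point such as the Solar System barycentre yields, to first order, the projection of the spacecraft position onto the pulsar's line of sight,
\[
c\,\Delta t \;\approx\; \hat{n}\cdot\vec{r},
\]
where \hat{n} is the unit vector toward the pulsar, \vec{r} is the spacecraft position relative to the reference point, and c is the speed of light; repeating the measurement for three or more pulsars in different directions constrains the full three-dimensional position.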
What is the evidence for the existence of a supermassive black hole at the center of the Milky Way galaxy?,"Professor Andrea Ghez et al. suggested in 2014 that G2 is not a gas cloud but rather a pair of binary stars that had been orbiting the black hole in tandem and merged into an extremely large star. ==See also== * * List of nearest known black holes ==Notes== ==References== * * * * * * * * ==Further reading== * * * * * * * * * * ==External links== * UCLA Galactic Center Group – latest results retrieved 8/12/2009 * Is there a Supermassive Black Hole at the Center of the Milky Way? (arXiv preprint) * 2004 paper deducing mass of central black hole from orbits of 7 stars (arXiv preprint) * ESO video clip of orbiting star (533 KB MPEG Video) * The Proper Motion of Sgr A* and the Mass of Sgr A* (PDF) * NRAO article regarding VLBI radio imaging of Sgr A* * Peering into a Black Hole, 2015 New York Times video * Image of supermassive black hole Sagittarius A* (2022), Harvard Center for Astrophysics * (NSF; 12 May 2022) Category:Articles containing video clips Category:Astronomical objects discovered in 1974 Category:Astronomical radio sources Category:Supermassive black holes Category:Galactic Center From examining the Keplerian orbit of S2, they determined the mass of Sagittarius A* to be solar masses, confined in a volume with a radius no more than 17 light-hours ().Ghez et al. (2003) ""The First Measurement of Spectral Lines in a Short-Period Star Bound to the Galaxy's Central Black Hole: A Paradox of Youth"" Astrophysical Journal 586 L127 Later observations of the star S14 showed the mass of the object to be about 4.1 million solar masses within a volume with radius no larger than 6.25 light- hours (). Based on mass and increasingly precise radius limits, astronomers concluded that Sagittarius A* must be the Milky Way's central supermassive black hole. The stellar orbits in the Galactic Center show that the central mass concentration of four million solar masses must be a black hole, beyond any reasonable doubt.""O'Neill 2008 On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sgr A*. Nevertheless, it is commonly accepted that the center of nearly every galaxy contains a supermassive black hole. The comparatively small mass of this supermassive black hole, along with the low luminosity of the radio and infrared emission lines, imply that the Milky Way is not a Seyfert galaxy. The rapid motion of S2 (and other nearby stars) easily stood out against slower-moving stars along the line-of-sight so these could be subtracted from the images.Schödel et al. 2002 thumb|upright=1.2|Dusty cloud G2 passes the supermassive black hole at the center of the Milky Way. In all other galaxies observed to date, the rms velocities are flat, or even falling, toward the center, making it impossible to state with certainty that a supermassive black hole is present. PG 1426+015 3C 273 Brightest quasar in the sky ULAS J1342+0928 Most distant quasar − currently on record as the most distant quasar at z=7.54 Messier 49 ESO 444-46 – Brightest cluster galaxy of Abell 3558 in the center of the Shapley Supercluster; estimated using spheroidal luminosity profile of the host galaxy. Sagittarius A* ( ), abbreviated Sgr A* ( ), is the supermassive black hole at the Galactic Center of the Milky Way. 
For a black hole of around 4 million solar masses, this corresponds to a size of approximately 52 μas, which is consistent with the observed overall size of about 50 μas, the size (apparent diameter) of the black hole Sgr A* itself being 20 μas. PKS 2128-123 ULAS J1120+0641 QSO 0537-286 NGC 3115 Q0906+6930 Most distant blazar, at z = 5.47 QSO B0805+614 Messier 84 J100758.264+211529.207 (""Pōniuāʻena"") Second most-distant quasar known PKS 2059+034 Abell 3565-BCG NGC 7768 NGC 1277 Once thought to harbor a black hole so large that it contradicted modern galaxy formation and evolutionary theories, re-analysis of the data revised it downward to roughly a third of the original estimate. and then one tenth. The star is in the Grus (or Crane) constellation in the southern sky, and about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*. ==Orbiting stars== thumb|left|Inferred orbits of six stars around supermassive black hole candidate Sagittarius A* at the Milky Way's center thumb|Stars moving around Sagittarius A*, 20-year timelapse, ending in 2018 thumb|Stars moving around Sagittarius A* as seen in 2021 There are a number of stars in close orbit around Sagittarius A*, which are collectively known as ""S stars"". Black hole of central elliptical galaxy of RX J1532.9+3021 * QSO B2126-158 – Higher value estimated with quasar Hβ emission line correlation. This is an ordered list of the most massive black holes so far discovered (and probable candidates), measured in units of solar masses (), approximately . == Introduction == A supermassive black hole (SMBH) is an extremely large black hole, on the order of hundreds of thousands to billions of solar masses (), and is theorized to exist in the center of almost all massive galaxies. PG 1307+085 281 840 000 SAGE0536AGN Constitutes 1.4% of the mass of its host galaxy NGC 1275 Central galaxy of the Perseus Cluster 3C 390.3 II Zwicky 136 PG 0052+251 Messier 59 This black hole has a retrograde rotation. The observed distribution of the planes of the orbits of the S stars limits the spin of Sagittarius A* to less than 10% of its theoretical maximum value. , S4714 is the current record holder of closest approach to Sagittarius A*, at about , almost as close as Saturn gets to the Sun, traveling at about 8% of the speed of light. ULAS J1342+0928 is the second-most distant known quasar detected and contains the second-most distant and oldest known supermassive black hole, at a reported redshift of z = 7.54. The related supermassive black hole is reported to be ""800 million times the mass of the Sun"". ==Discovery== On 6 December 2017, astronomers published that they had found the quasar using data from the Wide- field Infrared Survey Explorer (WISE) combined with ground-based surveys from one of the Magellan Telescopes at Las Campanas Observatory in Chile, as well as the Large Binocular Telescope in Arizona and the Gemini North telescope in Hawaii. The observations of several stars orbiting Sagittarius A*, particularly star S2, have been used to determine the mass and upper limits on the radius of the object. ","The Milky Way galaxy has a supermassive black hole at its center because of the bright flare activity observed near Sagittarius A*. The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit. 
No known astronomical object other than a black hole can contain 4.0 million M☉ in this volume of space.","The Milky Way galaxy has a supermassive black hole at its center because the star S14 follows an elliptical orbit with a period of 15.2 years and a pericenter of 17 light-hours from the center of the central object. From the motion of star S14, the object's mass can be estimated as 4.0 million M☉, or about 7.96×1036 kg. The radius of the central object must be less than 17 light-hours, because otherwise S14 would collide with it. Observations of the star S2 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit. No known astronomical object other than a black hole can contain 4.0 million M☉ in this volume of space.","The Milky Way galaxy has a supermassive black hole at its center because of the bright flare activity observed near Sagittarius A*. The radius of the central object must be less than 6.25 light-hours, about the diameter of Uranus' orbit. Observations of the star S2 indicate that the radius is no more than 17 light-hours, because otherwise S2 would collide with it. No known astronomical object other than a black hole can contain 4.0 million M☉ in this volume of space.",The Milky Way galaxy has a supermassive black hole at its center because it is the only explanation for the bright flare activity observed near Sagittarius A* at a separation of six to ten times the gravitational radius of the candidate SMBH.,"The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter of 17 light-hours from the center of the central object. From the motion of star S2, the object's mass can be estimated as 4.0 million M☉, or about 7.96×1036 kg. The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit. No known astronomical object other than a black hole can contain 4.0 million M☉ in this volume of space.",E,kaggle200,"Gravitational redshift in light from the S2 star orbiting the supermassive black hole Sagittarius A* in the center of the Milky Way has been measured with the Very Large Telescope using GRAVITY, NACO and SIFONI instruments. Additionally, there has now been detection of the Schwarzschild precession in the orbit of the star S2 near the Galactic centre massive black hole.
Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions of times the mass of the Sun (). Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, not even light. Observational evidence indicates that almost every large galaxy has a supermassive black hole at its center. For example, the Milky Way has a supermassive black hole in its Galactic Center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.
Astronomers are confident that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because:","A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions of times the mass of the Sun (M☉). Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, not even light. Observational evidence indicates that almost every large galaxy has a supermassive black hole at its center. For example, the Milky Way galaxy has a supermassive black hole at its center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
In the Milky Way Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8×1013 m or 120 AU) from the center of the central object.
From the motion of star S2, the object's mass can be estimated as 4.0 million M☉, or about 7.96×1036 kg.
The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Observations of the star S14 indicate that the radius is no more than 6.25 light-hours, about the diameter of Uranus' orbit.","Observational evidence indicates that almost every large galaxy has a supermassive black hole at its centerAccretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.
Astronomers are confident that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: For example, the Milky Way galaxy has a supermassive black hole at its center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars. Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
In the Milky Way Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8×1013 m or 120 AU) from the center of the central object.
From the motion of star S2, the object's mass can be estimated as 4.0 million M☉, or about 7.96×1036 kg.
The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Additionally, there has now been detection of the Schwarzschild precession in the orbit of the star S2 near the Galactic centre massive black hole.
Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions of times the mass of the Sun ()For example, the Milky Way has a supermassive black hole in its Galactic Center, co","Observational evidence indicates that almost every large galaxy has a supermassive black hole at its centerAccretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars.
Astronomers are confident that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: For example, the Milky Way galaxy has a supermassive black hole at its center, corresponding to the radio source Sagittarius A*. Accretion of interstellar gas onto supermassive black holes is the process responsible for powering active galactic nuclei (AGNs) and quasars. Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
In the Milky Way Evidence indicates that the Milky Way galaxy has a supermassive black hole at its center, 26,000 light-years from the Solar System, in a region called Sagittarius A* because: The star S2 follows an elliptical orbit with a period of 15.2 years and a pericenter (closest distance) of 17 light-hours (1.8×1013 m or 120 AU) from the center of the central object.
From the motion of star S2, the object's mass can be estimated as 4.0 million M☉, or about 7.96×1036 kg.
The radius of the central object must be less than 17 light-hours, because otherwise S2 would collide with it. Additionally, there has now been detection of the Schwarzschild precession in the orbit of the star S2 near the Galactic centre massive black hole.
Two supermassive black holes have been directly imaged by the Event Horizon Telescope: the black hole in the giant elliptical galaxy Messier 87 and the black hole at the Milky Way’s center.
A supermassive black hole (SMBH or sometimes SBH) is the largest type of black hole, with its mass being on the order of hundreds of thousands, or millions to billions of times the mass of the Sun (M☉). For example, the Milky Way has a supermassive black hole in its Galactic Center, co[SEP]What is the evidence for the existence of a supermassive black hole at the center of the Milky Way galaxy?","['D', 'E', 'B']",0.5
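The figures quoted in these passages can be cross-checked with a short calculation (standard constants assumed: M☉ ≈ 1.99×10^30 kg, 1 AU ≈ 1.496×10^11 m):
\[
M \approx 4.0\times10^{6}\,M_\odot \times 1.99\times10^{30}\,\mathrm{kg}\,M_\odot^{-1} \approx 8.0\times10^{36}\,\mathrm{kg},
\qquad
17\ \text{light-hours} \approx 17 \times 3600\,\mathrm{s} \times 3.0\times10^{8}\,\mathrm{m\,s^{-1}} \approx 1.8\times10^{13}\,\mathrm{m} \approx 120\ \mathrm{AU},
\]
consistent with the 7.96×10^36 kg mass and the 1.8×10^13 m (120 AU) pericentre distance stated above.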
What is the function of the fibrous cardiac skeleton?,"In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. This is the strongest part of the fibrous cardiac skeleton. Understood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles. ==Structure== The structure of the components of the heart has become an area of increasing interest. Fibrocyte cells normally secrete collagen, and function to provide structural support for the heart. While not a traditionally or ""true"" or rigid skeleton, it does provide structure and support for the heart, as well as isolate the atria from the ventricles. The cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers).The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aorta. The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambers. The physiologic cardiac skeleton forms a firewall governing autonomic/electrical influence until bordering the bundle of His which further governs autonomic flow to the bundle branches of the ventricles. The cardiac skeleton binds several bands of dense connective tissue, as collagen, that encircle the bases of the pulmonary trunk, aorta, and all four heart valves.Martini Anatomy and Physiology, 5th ed. Band theory within the ventricular myocardium first suggested by Dr. Francisco Torrent-Guasp (1931-2005) closely follows the band structure above. Throughout life, the cardiac collagen skeleton is remodeled. The cardiac skeleton does this by establishing an electrically impermeable boundary to autonomic electrical influence within the heart. The cardiac skeleton ensures that the electrical and autonomic energy generated above is ushered below and cannot return. Fibrotic cardiac muscle is stiffer and less compliant and is seen in the progression to heart failure. In anatomy, fibrous joints are joints connected by fibrous tissue, consisting mainly of collagen. This provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles. The inert characteristics of the collagen structure that blocks electrical influence also make it difficult to attain an accurate signal for imaging without allowing for an applied ratio of collagen to calcium. ==History== Boundaries within the heart were first described and greatly magnified by Drs. Charles S. Peskin and David M. McQueen at the Courant Institute of Mathematical Sciences. ==See also== *Chordae tendineae *Fibrous ring of intervertebral disk * Coronary arteries * Coronary sinus ==References== ==External links== * Description at cwc.net * Histology (see slide #96) Category:Cardiac anatomy The upper chambers (atria) and lower (ventricles) are electrically divided by the properties of collagen proteins within the rings. Distensibility of the ventricles is tied to variable accumulation of minerals which also contributes to the delay of the depolarization wave in geriatric patients that can take place from the AV node and the bundle of His. 
===Fibrous rings=== The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orifices. Cardiac fibrosis commonly refers to the excess deposition of extracellular matrix in the cardiac muscle, but the term may also refer to an abnormal thickening of the heart valves due to inappropriate proliferation of cardiac fibroblasts. Small quantity of fibrous tissue holds the bones together. ",The fibrous cardiac skeleton is a system of blood vessels that supplies oxygen and nutrients to the heart muscle.,"The fibrous cardiac skeleton is responsible for the pumping action of the heart, regulating the flow of blood through the atria and ventricles.","The fibrous cardiac skeleton provides structure to the heart, forming the atrioventricular septum that separates the atria from the ventricles, and the fibrous rings that serve as bases for the four heart valves.",The fibrous cardiac skeleton is a network of nerves that controls the heartbeat and rhythm of the heart.,"The fibrous cardiac skeleton is a protective layer that surrounds the heart, shielding it from external damage.",C,kaggle200,"From the margins of the semicircular notches, the fibrous structure of the ring is continued into the segments of the valves.
The term cardiac skeleton is sometimes considered synonymous with endomysium in the heart, but cardiac skeleton also refers to the combination of the endomysium and perimysium.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. The cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers).
The right and left fibrous rings of heart (""annuli fibrosi cordis"") surround the atrioventricular and arterial orifices. The right fibrous ring is known as the ""annulus fibrosus dexter cordis"", and the left is known as the ""annulus fibrosus sinister cordis"". The right fibrous trigone is continuous with the central fibrous body. This is the strongest part of the fibrous cardiac skeleton.","The cardiac skeleton ensures that the electrical and autonomic energy generated above is ushered below and cannot return. The cardiac skeleton does this by establishing an electrically impermeable boundary to autonomic electrical influence within the heart. Simply put, the dense connective tissue within the cardiac skeleton does not conduct electricity and its deposition within the myocardial matrix is not accidental.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through them. The cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers).The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aorta. This provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles.The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambers. In normal anatomy, there is only one conduit for electrical conduction from the upper chambers to the lower chambers, known as the atrioventricular node. The physiologic cardiac skeleton forms a firewall governing autonomic/electrical influence until bordering the bundle of His which further governs autonomic flow to the bundle branches of the ventricles. Understood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles.
Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orifices. The right fibrous ring is known as the annulus fibrosus dexter cordis, and the left is known as the annulus fibrosus sinister cordis. The right fibrous trigone is continuous with the central fibrous body. This is the strongest part of the fibrous cardiac skeleton.","This is the strongest part of the fibrous cardiac skeleton.This is the strongest part of the fibrous cardiac skeleton- From the margins of the semicircular notches, the fibrous structure of the ring is continued into the segments of the valves.
The term cardiac skeleton is sometimes considered synonymous with endomysium in the heart, but cardiac skeleton also refers to the combination of the endomysium and perimysium.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through themSimply put, the dense connective tissue within the cardiac skeleton does not conduct electricity and its deposition within the myocardial matrix is not accidental.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through themThis provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles.The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambersThe cardiac skeleton ensures that the electrical and autonomic energy generated above is ushered below and cannot returnThe cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers).The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aortaUnderstood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles.
Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orificesThe cardiac skeleton does this by ","This is the strongest part of the fibrous cardiac skeleton.This is the strongest part of the fibrous cardiac skeleton- From the margins of the semicircular notches, the fibrous structure of the ring is continued into the segments of the valves.
The term cardiac skeleton is sometimes considered synonymous with endomysium in the heart, but cardiac skeleton also refers to the combination of the endomysium and perimysium.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through themSimply put, the dense connective tissue within the cardiac skeleton does not conduct electricity and its deposition within the myocardial matrix is not accidental.
In cardiology, the cardiac skeleton, also known as the fibrous skeleton of the heart, is a high-density homogeneous structure of connective tissue that forms and anchors the valves of the heart, and influences the forces exerted by and through themThis provides crucial support and structure to the heart while also serving to electrically isolate the atria from the ventricles.The unique matrix of connective tissue within the cardiac skeleton isolates electrical influence within these defined chambersThe cardiac skeleton ensures that the electrical and autonomic energy generated above is ushered below and cannot returnThe cardiac skeleton separates and partitions the atria (the smaller, upper two chambers) from the ventricles (the larger, lower two chambers).The heart's cardiac skeleton comprises four dense connective tissue rings that encircle the mitral and tricuspid atrioventricular (AV) canals and extend to the origins of the pulmonary trunk and aortaUnderstood as such, the cardiac skeleton efficiently centers and robustly funnels electrical energy from the atria to the ventricles.
Fibrous rings The right and left fibrous rings of heart (annuli fibrosi cordis) surround the atrioventricular and arterial orifices. The cardiac skeleton does this by [SEP]What is the function of the fibrous cardiac skeleton?","['C', 'B', 'D']",1.0
What is the Carnot engine?,"Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Other practical requirements that make the Carnot cycle hard to realize (e.g., fine control of the gas, thermal contact with the surroundings including high and low temperature reservoirs), so the Carnot engine should be thought as the theoretical limit of macroscopic scale heat engines rather than a practical device that could ever be built. ==See also== * Carnot heat engine * Reversible process (thermodynamics) ==References== ;Notes ;Sources :* Carnot, Sadi, Reflections on the Motive Power of Fire :* Ewing, J. A. (1910) The Steam-Engine and Other Engines edition 3, page 62, via Internet Archive :* :* :* :* American Institute of Physics, 2011. . This is the Carnot heat engine working efficiency definition as the fraction of the work done by the system to the thermal energy received by the system from the hot reservoir per cycle. The Carnot engine is the most efficient heat engine which is theoretically possible. By Carnot's theorem, it provides an upper limit on the efficiency of any classical thermodynamic engine during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference through the application of work to the system. A quantum Carnot engine is one in which the atoms in the heat bath are given a small bit of quantum coherence. Carnot defined work as “weight lifted through a height”. ==Carnot cycle== 350px|thumb|Figure 2: A Carnot cycle acting as a heat engine, illustrated on a temperature-entropy diagram. The Carnot cycle when acting as a heat engine consists of the following steps: # Reversible isothermal expansion of the gas at the ""hot"" temperature, TH (isothermal heat addition or absorption). Hence, the efficiency of the real engine is always less than the ideal Carnot engine. In a Carnot cycle, a system or engine transfers energy in the form of heat between two thermal reservoirs at temperatures T_H and T_C (referred to as the hot and cold reservoirs, respectively), and a part of this transferred energy is converted to the work done by the system. A Carnot cycle is an ideal thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. At this point the gas is in the same state as at the start of step 1. == Carnot's theorem == Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between the same reservoirs. \eta_{I}=\frac{W}{Q_{\mathrm{H}}}=1-\frac{T_{\mathrm{C}}}{T_{\mathrm{H}}} Explanation This maximum efficiency \eta_\text{I} is defined as above: : is the work done by the system (energy exiting the system as work), : Q_\text{H} is the heat put into the system (heat energy entering the system), : T_\text{C} is the absolute temperature of the cold reservoir, and : T_\text{H} is the absolute temperature of the hot reservoir. In a footnote, Carnot distinguishes the steam-engine (machine à vapeur) from the heat-engine in general. 
The work W done by the system or engine to the environment per Carnot cycle depends on the temperatures of the thermal reservoirs and the entropy transferred from the hot reservoir to the system \Delta S per cycle such as W = (T_H - T_C) \Delta S = (T_H - T_C) \frac{Q_H}{T_H}, where Q_H is heat transferred from the hot reservoir to the system per cycle. ==Stages== A Carnot cycle as an idealized thermodynamic cycle performed by a heat engine (Carnot heat engine) consists of the following steps. In the process of going through this cycle, the system may perform work on its surroundings, thereby acting as a heat engine. == Carnot's diagram == In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are ""two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. This thermal energy is the cycle initiator. === Reversed Carnot cycle === A Carnot heat-engine cycle described is a totally reversible cycle. The first prototype of the diesel engine was based on the Carnot cycle. == Carnot heat engine as an impractical macroscopic construct == A Carnot heat engine is a heat engine performing a Carnot cycle, and its realization on a macroscopic scale is impractical. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. A Carnot heat engineIn French, Carnot uses machine à feu, which Thurston translates as heat-engine or steam-engine. ",The Carnot engine is a theoretical engine that operates in the limiting mode of extreme speed known as dynamic. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is an ideal heat engine that operates in the limiting mode of extreme slowness known as quasi-static. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a real heat engine that operates in the limiting mode of extreme speed known as dynamic. It represents the theoretical minimum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a theoretical engine that operates in the limiting mode of extreme slowness known as quasi-static. It represents the theoretical minimum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,The Carnot engine is a real engine that operates in the limiting mode of extreme slowness known as quasi-static. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures.,B,kaggle200,"Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. A corollary of this theorem is that every reversible heat engine operating between a pair of heat reservoirs is equally efficient, regardless of the working substance employed or the operation details. 
Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs.
Carnot's theorem is a formal statement of this fact: ""No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs."" Thus, Equation gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: ""All reversible engines operating between the same heat reservoirs are equally efficient."" Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Looking at this formula an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature.
For any heat engine, the exergy efficiency compares a given cycle to a Carnot heat engine with the cold side temperature in equilibrium with the environment. Note that a Carnot engine is the most efficient heat engine possible, but not the most efficient device for creating work. Fuel cells, for instance, can theoretically reach much higher efficiencies than a Carnot engine; their energy source is not thermal energy and so their exergy efficiency does not compare them to a Carnot engine.
The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are:","Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, Equation 3 gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Looking at this formula an interesting fact becomes apparent: Lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature.
For any heat engine, the exergy efficiency compares a given cycle to a Carnot heat engine with the cold side temperature in equilibrium with the environment. Note that a Carnot engine is the most efficient heat engine possible, but not the most efficient device for creating work. Fuel cells, for instance, can theoretically reach much higher efficiencies than a Carnot engine; their energy source is not thermal energy and so their exergy efficiency does not compare them to a Carnot engine.
Carnot's principle The historical origin of the second law of thermodynamics was in Sadi Carnot's theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the caloric theory represented the dominant understanding of the nature of heat, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are: ...wherever there exists a difference of temperature, motive power can be produced.The production of motive power is then due in steam engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body ...The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric.In modern terms, Carnot's principle may be stated more precisely: The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.","The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibriumA Carnot engine operated in this way is the most efficient possible heat engine using those two temperaturesSince a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs.
Carnot's theorem is a formal statement of this fact: ""No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs."" Thus, Equation gives the maximum efficiency possible for any engine using the corresponding temperatures. Note that a Carnot engine is the most efficient heat engine possible, but not the most efficient device for creating work. A corollary to Carnot's theorem states that: ""All reversible engines operating between the same heat reservoirs are equally efficient."" Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. Fuel cells, for instance, can theoretically reach much higher efficiencies than a Carnot engine; their energy source is not thermal energy and so their exergy efficiency does not compare them to a Carnot engine.
Carno","The centerpiece of that analysis, now known as a Carnot engine, is an ideal heat engine fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures. Since a Carnot heat engine is also a reversible engine, the efficiency of all the reversible heat engines is determined as the efficiency of the Carnot heat engine that depends solely on the temperatures of its hot and cold reservoirs.
Carnot's theorem is a formal statement of this fact: ""No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs."" Thus, Equation gives the maximum efficiency possible for any engine using the corresponding temperatures. Note that a Carnot engine is the most efficient heat engine possible, but not the most efficient device for creating work. A corollary to Carnot's theorem states that: ""All reversible engines operating between the same heat reservoirs are equally efficient."" Rearranging the right side of the equation gives what may be a more easily understood form of the equation, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir. Carnot's theorem states that all heat engines operating between the same two thermal or heat reservoirs can't have efficiencies greater than a reversible heat engine operating between the same reservoirs. Fuel cells, for instance, can theoretically reach much higher efficiencies than a Carnot engine; their energy source is not thermal energy and so their exergy efficiency does not compare them to a Carnot engine.
Carno[SEP]What is the Carnot engine?","['E', 'B', 'D']",0.5
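The Carnot bound quoted in these passages can be illustrated with a small worked example (the temperatures are illustrative, not taken from the text):
\[
\eta_{\max} = 1-\frac{T_{\mathrm{C}}}{T_{\mathrm{H}}} = \frac{T_{\mathrm{H}}-T_{\mathrm{C}}}{T_{\mathrm{H}}},
\qquad
T_{\mathrm{H}}=600\,\mathrm{K},\ T_{\mathrm{C}}=300\,\mathrm{K} \;\Rightarrow\; \eta_{\max}=0.5 .
\]
Lowering the cold reservoir by 50 K raises the ceiling to 1 - 250/600 ≈ 0.58, whereas raising the hot reservoir by 50 K gives only 1 - 300/650 ≈ 0.54, illustrating the claim above that the cold-side temperature has the larger effect.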
Which mathematical function is commonly used to characterize linear time-invariant systems?,"A linear system that is not time-invariant can be solved using other approaches such as the Green function method. == Continuous-time systems == ===Impulse response and convolution=== The behavior of a linear, continuous-time, time-invariant system with input signal x(t) and output signal y(t) is described by the convolution integral:Crutchfield, p. Of particular interest are pure sinusoids (i.e., exponential functions of the form e^{j \omega t} where \omega \in \mathbb{R} and j \mathrel{\stackrel{\text{def}}{=}} \sqrt{-1}). The exponential functions A e^{s t}, where A, s \in \mathbb{C}, are eigenfunctions of a linear, time-invariant operator. Linear time- invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. Similarly, a discrete-time linear time-invariant (or, more generally, ""shift-invariant"") system is defined as one operating in discrete time: y_{i} = x_{i} * h_{i} where y, x, and h are sequences and the convolution, in discrete time, uses a discrete summation rather than an integral. thumb|Relationship between the time domain and the frequency domain|right|320px LTI systems can also be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response (or Z transform in the case of discrete-time systems). In applied mathematics, the Rosenbrock system matrix or Rosenbrock's system matrix of a linear time-invariant system is a useful representation bridging state-space representation and transfer function matrix form. In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. * The behavior of trajectories as a function of a parameter may be what is needed for an application. Of particular interest are pure sinusoids; i.e. exponentials of the form e^{j \omega n}, where \omega \in \mathbb{R}. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials. In mathematics, the Lyapunov time is the characteristic timescale on which a dynamical system is chaotic. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. The exponential functions z^n = e^{sT n}, where n \in \mathbb{Z}, are eigenfunctions of a linear, time-invariant operator. As example, the equation: :y'= -\text{sgn}(y)\sqrt{|y|},\,\,y(0)=1 Admits the finite duration solution: :y(x)=\frac{1}{4}\left(1-\frac{x}{2}+\left|1-\frac{x}{2}\right|\right)^2 == See also == * Behavioral modeling * Cognitive modeling * Complex dynamics * Dynamic approach to second language development * Feedback passivation * Infinite compositions of analytic functions * List of dynamical system topics * Oscillation * People in systems and control * Sharkovskii's theorem * System dynamics * Systems theory * Principle of maximum caliber ==References== * * online version of first edition on the EMIS site . 
* == Further reading == Works providing a broad coverage: * (available as a reprint: ) * Encyclopaedia of Mathematical Sciences () has a sub-series on dynamical systems with reviews of current research. Concentrates on the applications of dynamical systems. In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized. In particular, for any A, s \in \mathbb{C}, the system output is the product of the input A e^{st} and the constant H(s). LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves. ==Overview== The defining properties of any LTI system are linearity and time invariance. ",Trigonometric function,Quadratic function,Exponential function,Logarithmic function,Transfer function,E,kaggle200,"An eigenfunction is a function for which the output of the operator is a scaled version of the same function. That is,
If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
""Multiplicative scramblers"" (also known as ""feed-through"") are called so because they perform a ""multiplication"" of the input signal by the scrambler's transfer function in Z-space. They are discrete linear time-invariant systems.
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the ""transfer function"" which is:","For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components. Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay.If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in the fields of signal processing, communication theory, and control theory. The term is often used exclusively to refer to linear time-invariant (LTI) systems. Most real systems have non-linear input/output characteristics, but many systems, when operated within nominal parameters (not ""over-driven"") have behavior close enough to linear that LTI system theory is an acceptable representation of the input/output behavior.","That is,
If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. The term is often used exclusively to refer to linear time-invariant (LTI) systems. They are discrete linear time-invariant systems.
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the ""transfer function"" which is: For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in the fields of signal processing, communication theory, and control theory. Nonlinear time-invariant systems lack a comprehensive, governing theory. Systems which lack the time-invariant property are studied as time-variant systems.
""Multiplicative scramblers"" (also known as ""feed-through"") are called so because they perform a ""multiplication"" of the input signal by the scrambler's transfer fun","That is,
If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Typical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).
In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. The term is often used exclusively to refer to linear time-invariant (LTI) systems. They are discrete linear time-invariant systems.
Linear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function called the ""transfer function"" which is: For time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function x(t) in terms of unit impulses or frequency components. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems.
Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in the fields of signal processing, communication theory, and control theory. Nonlinear time-invariant systems lack a comprehensive, governing theory. Systems which lack the time-invariant property are studied as time-variant systems.
""Multiplicative scramblers"" (also known as ""feed-through"") are called so because they perform a ""multiplication"" of the input signal by the scrambler's transfer fun[SEP]Which mathematical function is commonly used to characterize linear time-invariant systems?","['E', 'C', 'D']",1.0
What is the second law of thermodynamics?,"The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. The second law of thermodynamics in other versions establishes the concept of entropy as a physical property of a thermodynamic system. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. The first law of thermodynamics is a formulation of the law of conservation of energy, adapted for thermodynamic processes. The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics. ==Introduction== thumb|upright|Heat flowing from hot water to cold water The first law of thermodynamics provides the definition of the internal energy of a thermodynamic system, and expresses its change for a closed system in terms of work and heat.Planck, M. (1897/1903), pp. 40–41. In physics, the first law of thermodynamics is an expression of the conservation of total energy of a system. Because of the looseness of its language, e.g. universe, as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. They do not offer it as a full statement of the second law: ::... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.Borgnakke, C., Sonntag., R.E. (2009), p. 304. Removal of matter from a system can also decrease its entropy. ===Relating the Second Law to the definition of temperature=== The second law has been shown to be equivalent to the internal energy defined as a convex function of the other extensive properties of the system. It can be used to predict whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. *Caratheodory, C., ""Examination of the foundations of thermodynamics,"" trans. by D. H. Delphenich * The Second Law of Thermodynamics, BBC Radio 4 discussion with John Gribbin, Peter Atkins & Monica Grady (In Our Time, December 16, 2004) * The Journal of the International Society for the History of Philosophy of Science, 2012 Category:Equations of physics 2 Category:Non-equilibrium thermodynamics Category:Philosophy of thermal and statistical physics The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect (the causal arrow of time, or causality). chapter 6 ==Irreversibility== Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. There are two main ways of stating a law of thermodynamics, physically or mathematically. 
This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. For isolated systems, no energy is provided by the surroundings and the second law requires that the entropy of the system alone must increase: ΔS > 0. If, rather than an isolated system, we have a closed system, in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. Conceptually, the first law describes the fundamental principle that systems do not consume or 'use up' energy, that energy is neither created nor destroyed, but is simply converted from one form to another. In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. Energy is conserved in such transfers. ==Description== ===Cyclic processes=== The first law of thermodynamics for a closed system was expressed in two ways by Clausius. ",The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. It states that heat always moves from colder objects to hotter objects unless energy in some form is supplied to reverse the direction of heat flow.,The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. It establishes that the internal energy of a thermodynamic system is a physical property that can be used to predict whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics.,The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. It establishes that all heat energy can be converted into work in a cyclic process.,"The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. It states that the entropy of isolated systems left to spontaneous evolution can decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy.",The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. It establishes the concept of entropy as a physical property of a thermodynamic system and can be used to predict whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics.,E,kaggle200,"According to the first law of thermodynamics, the change ""dU"" in the internal energy of the sub-system is the sum of the heat ""δq"" added to the sub-system, ""less"" any work ""δw"" done ""by"" the sub-system, ""plus"" any net chemical energy entering the sub-system ""d"" Σ""μN"", so that:
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics in other versions establishes the concept of entropy as a physical property of a thermodynamic system. It can be used to predict whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. One simple statement of the law is that heat always moves from hotter objects to colder objects (or ""downhill""), unless energy in some form is supplied to reverse the direction of heat flow. Another definition is: ""Not all heat energy can be converted into work in a cyclic process.""","There have been nearly as many formulations of the second law as there have been discussions of it.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. One simple statement of the law is that heat always moves from hotter objects to colder objects (or ""downhill""), unless energy in some form is supplied to reverse the direction of heat flow. Another definition is: ""Not all heat energy can be converted into work in a cyclic process."" The second law of thermodynamics in other versions establishes the concept of entropy as a physical property of a thermodynamic system. It can be used to predict whether processes are forbidden despite obeying the requirement of conservation of energy as expressed in the first law of thermodynamics and provides necessary criteria for spontaneous processes. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time. Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. Statistical mechanics provides a microscopic explanation of the law in terms of probability distributions of the states of large assemblies of atoms or molecules. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on caloric theory, is Carnot's theorem, formulated by the French scientist Sadi Carnot, who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.","There have been nearly as many formulations of the second law as there have been discussions of it.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time. Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. According to the first law of thermodynamics, the change ""dU"" in the internal energy of the sub-system is the sum of the heat ""δq"" added to the sub-system, ""less"" any work ""δw"" done ""by"" the sub-system, ""plus"" any net chemical energy entering the sub-system ""d"" Σ""μN"", so that:
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics in other versions establishes the concept of entropy as a physical property of a thermodynamic system. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law has been expressed in many ways.","There have been nearly as many formulations of the second law as there have been discussions of it.
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time.
The second law of thermodynamics is a physical law based on universal experience concerning heat and energy interconversions. An increase in the combined entropy of system and surroundings accounts for the irreversibility of natural processes, often referred to in the concept of the arrow of time. Historically, the second law was an empirical finding that was accepted as an axiom of thermodynamic theory. The second law may be formulated by the observation that the entropy of isolated systems left to spontaneous evolution cannot decrease, as they always arrive at a state of thermodynamic equilibrium where the entropy is highest at the given internal energy. According to the first law of thermodynamics, the change ""dU"" in the internal energy of the sub-system is the sum of the heat ""δq"" added to the sub-system, ""less"" any work ""δw"" done ""by"" the sub-system, ""plus"" any net chemical energy entering the sub-system ""d"" Σ""μN"", so that:
The second law of thermodynamics allows the definition of the concept of thermodynamic temperature, relying also on the zeroth law of thermodynamics.
The second law of thermodynamics in other versions establishes the concept of entropy as a physical property of a thermodynamic system. The first rigorous definition of the second law based on the concept of entropy came from German scientist Rudolf Clausius in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law has been expressed in many ways.[SEP]What is the second law of thermodynamics?","['D', 'E', 'B']",0.5
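Editor's note (an illustration appended after the row, not part of the dataset itself): the entropy statement in the answer can be made concrete with a one-line calculation. For heat Q passing from a hot reservoir at T_hot to a cold one at T_cold, the combined entropy change is Q/T_cold - Q/T_hot, which is positive; the reverse flow would make it negative, which the second law forbids for an isolated arrangement. The values of Q and the temperatures below are assumed examples.

```python
# Assumed example values: heat Q flowing from a hot reservoir at T_hot to a
# cold reservoir at T_cold. The combined entropy change is positive, so the
# forward flow is allowed; the reverse flow would decrease total entropy.
Q = 1000.0       # J
T_hot = 400.0    # K
T_cold = 300.0   # K

dS_total = Q / T_cold - Q / T_hot
print(round(dS_total, 3))    # 0.833 J/K  (> 0, spontaneous direction)
print(round(-dS_total, 3))   # -0.833 J/K (reverse flow, forbidden in isolation)
```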
"What are amorphous ferromagnetic metallic alloys, and what are their advantages?","Amorphous metals can be grouped in two categories, as either non-ferromagnetic, if they are composed of Ln, Mg, Zr, Ti, Pd, Ca, Cu, Pt and Au, or ferromagnetic alloys, if they are composed of Fe, Co, and Ni. Amorphous metals have higher tensile yield strengths and higher elastic strain limits than polycrystalline metal alloys, but their ductilities and fatigue strengths are lower. Their methods promise to speed up research and time to market for new amorphous metals alloys. ==Properties== Amorphous metal is usually an alloy rather than a pure metal. Amorphous alloys have a variety of potentially useful properties. Amorphous metals derive their strength directly from their non-crystalline structure, which does not have any of the defects (such as dislocations) that limit the strength of crystalline alloys. Amorphous metals are non-crystalline, and have a glass-like structure. Thin films of amorphous metals can be deposited via high velocity oxygen fuel technique as protective coatings. ==Applications== ===Commercial=== Currently the most important application is due to the special magnetic properties of some ferromagnetic metallic glasses. As temperatures change, the electrical resistivity of amorphous metals behaves very different than that of regular metals. One common way to try and understand the electronic properties of amorphous metals is by comparing them to liquid metals, which are similarly disordered, and for which established theoretical frameworks exist. Perhaps the most useful property of bulk amorphous alloys is that they are true glasses, which means that they soften and flow upon heating. There are several ways in which amorphous metals can be produced, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying.Some scientists only consider amorphous metals produced by rapid cooling from a liquid state to be glasses. As a result, amorphous alloys have been commercialized for use in sports equipment, medical devices, and as cases for electronic equipment. Thermal conductivity of amorphous materials is lower than that of crystalline metal. While the resistivity in regular metals generally increases with temperature, following the Matthiessen's rule, the resistivity in a large number of amorphous metals is found to decrease with increasing temperature. In 2004, bulk amorphous steel was successfully produced by two groups: one at Oak Ridge National Laboratory, who refers to their product as ""glassy steel"", and the other at the University of Virginia, calling theirs ""DARVA-Glass 101"".U.Va. News Service, ""University Of Virginia Scientists Discover Amorphous Steel Material is three times stronger than conventional steel and non- magnetic"" , U.Va. News Services, 7/2/2004Google Patents listing for Patent WO 2006091875 A2, ""Patent WO 2006091875 A2 - Amorphous steel composites with enhanced strengths, elastic properties and ductilities (Also published as US20090025834, WO2006091875A3)"", Joseph S Poon, Gary J Shiflet, Univ Virginia, 8/31/2006 The product is non-magnetic at room temperature and significantly stronger than conventional steel, though a long research and development process remains before the introduction of the material into public or military use. ISSN 2075-4701. 
==External links== *Liquidmetal Design Guide *""Metallic glass: a drop of the hard stuff"" at New Scientist *Glass-Like Metal Performs Better Under Stress Physical Review Focus, June 9, 2005 *""Overview of metallic glasses"" *New Computational Method Developed By Carnegie Mellon University Physicist Could Speed Design and Testing of Metallic Glass (2004) (the alloy database developed by Marek Mihalkovic, Michael Widom, and others) * *New tungsten-tantalum-copper amorphous alloy developed at the Korea Advanced Institute of Science and Technology Digital Chosunilbo (English Edition) : Daily News in English About Korea *Amorphous Metals in Electric-Power Distribution Applications *Amorphous and Nanocrystalline Soft Magnets * * *Metallic glasses and those composites, Materials Research Forum LLC, Millersville, PA, USA, (2018), p. 336 Category:Alloys Category:Emerging technologies Category:Metallurgy Category:Glass Together, they can explain the anomalous decrease of resistivity in amorphous metals, as the first part outweighs the second. thumb|Samples of amorphous metal, with millimeter scale An amorphous metal (also known as metallic glass, glassy metal, or shiny metal) is a solid metallic material, usually an alloy, with disordered atomic-scale structure. The nature of this production process is the reason why amorphous alloys are offered only in the form of thin, ductile metal foils. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity and can show metallic luster. ","Amorphous ferromagnetic metallic alloys are crystalline alloys that can be made by rapidly cooling a liquid alloy. Their properties are nearly anisotropic, resulting in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity.","Amorphous ferromagnetic metallic alloys are non-crystalline alloys that can be made by slowly heating a solid alloy. Their properties are nearly isotropic, resulting in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity.","Amorphous ferromagnetic metallic alloys are crystalline alloys that can be made by slowly cooling a liquid alloy. Their properties are nearly anisotropic, resulting in high coercivity, high hysteresis loss, low permeability, and low electrical resistivity.","Amorphous ferromagnetic metallic alloys are non-crystalline alloys that can be made by rapidly cooling a liquid alloy. Their properties are nearly isotropic, resulting in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity.","Amorphous ferromagnetic metallic alloys are non-crystalline alloys that can be made by rapidly heating a solid alloy. Their properties are nearly isotropic, resulting in high coercivity, high hysteresis loss, low permeability, and low electrical resistivity.",D,kaggle200,"The European Commission funded the Network of Excellence CMA from 2005 to 2010, uniting 19 core groups in 12 countries. From this emerged the European Integrated Center for the Development of New Metallic Alloys and Compounds C-MAC , which connects researchers at 21 universities.
Complex metallic alloys (CMAs) or complex intermetallics (CIMs) are intermetallic compounds characterized by the following structural features:
Most physical properties of CMAs show distinct differences with respect to the behavior of normal metallic alloys and therefore these materials possess a high potential for technological application.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of a liquid alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.","Bulk metallic glasses A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most pure and alloyed metals, in their solid state, have atoms arranged in a highly ordered crystalline structure. Amorphous metals have a non-crystalline glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Currently, the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers. Theft control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties.
Well known alloys Other alloys (see also solder alloys)
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.","More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Amorphous metals have a non-crystalline glass-like structure. Currently, the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses. From this emerged the European Integrated Center for the Development of New Metallic Alloys and Compounds C-MAC, which connects researchers at 21 universities.
Complex metallic alloys (CMAs) or complex intermetallics (CIMs) are intermetallic compounds characterized by the following structural features:
Most physical properties of CMAs show distinct differences with respect to the behavior of normal metallic alloys and therefore these materials possess a high potential for technological application.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of a liquid alloy. Theft control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties.
Well known alloys Other alloys (see also solder alloys)
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. Bulk metallic glasses A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal","More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Amorphous metals have a non-crystalline glass-like structure. Currently, the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses. From this emerged the European Integrated Center for the Development of New Metallic Alloys and Compounds C-MAC, which connects researchers at 21 universities.
Complex metallic alloys (CMAs) or complex intermetallics (CIMs) are intermetallic compounds characterized by the following structural features:
Most physical properties of CMAs show distinct differences with respect to the behavior of normal metallic alloys and therefore these materials possess a high potential for technological application.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of a liquid alloy. Theft control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties.
Well known alloys Other alloys (see also solder alloys)
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. Bulk metallic glasses A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal[SEP]What are amorphous ferromagnetic metallic alloys, and what are their advantages?","['D', 'E', 'A']",1.0
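Editor's note (an illustration appended after the row, not part of the dataset itself): the row's context links low coercivity to low hysteresis loss. One standard way to see the connection is that the energy dissipated per magnetization cycle (per unit volume) equals the area of the B-H hysteresis loop; idealizing the loop as a rectangle gives roughly 4*Hc*Br. The sketch below uses assumed, order-of-magnitude values for Hc and Br; they are illustrative, not figures from the passage.

```python
# Rough illustration only: energy dissipated per magnetization cycle (per unit
# volume) equals the B-H loop area; a rectangular loop of coercivity Hc and
# remanence Br encloses about 4 * Hc * Br. The Hc and Br values below are
# assumed order-of-magnitude numbers, not data from the passage.
def hysteresis_loss_per_cycle(hc_a_per_m, br_tesla):
    """Approximate loop area, in joules per cubic metre per cycle."""
    return 4.0 * hc_a_per_m * br_tesla

low_coercivity_alloy = hysteresis_loss_per_cycle(hc_a_per_m=2.0, br_tesla=1.2)
higher_coercivity_steel = hysteresis_loss_per_cycle(hc_a_per_m=40.0, br_tesla=1.2)
print(low_coercivity_alloy, higher_coercivity_steel)   # 9.6 vs 192.0 J/m^3
```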
What is the Penrose process?,"The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole.R. Penrose and R. M. Floyd, ""Extraction of Rotational Energy from a Black Hole"", Nature Physical Science 229, 177 (1971).Misner, Thorne, and Wheeler, Gravitation, Freeman and Company, 1973. The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole. thumb|upright=1.2|Trajectories of bodies in a Penrose process. However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it. == See also == * * * High Life, a 2018 science- fiction film that includes a mission to harness the process * == References == == Further reading == * * Category:Black holes Category:Energy sources Category:Hypothetical technology Penrose mechanism exploits that by diving into the ergosphere, dumping an object that was given negative energy, and returning with more energy than before. The energy is taken from the rotation of the black hole, so there is a limit on how much energy one can extract by Penrose process and similar strategies (for an uncharged black hole no more than 29% of its original mass;Carroll, Spacetime and Geometry pg. 271 larger efficiencies are possible for charged rotating black holes). == Details of the ergosphere == The outer surface of the ergosphere is the surface at which light that moves in the direction opposite to the rotation of the black hole remains at a fixed angular coordinate, according to an external observer. The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). In this way, rotational energy is extracted from the black hole, resulting in the black hole being spun down to a lower rotational speed. The maximum amount of energy (per mass of the thrown in object) is extracted if the black hole is rotating at the maximal rate, the object just grazes the event horizon and decays into forwards and backwards moving packets of light (the first escapes the black hole, the second falls inside). In an adjunct process, a black hole can be spun up (its rotational speed increased) by sending in particles that do not split up, but instead give their entire angular momentum to the black hole. The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. According to > Penrose's theory, it takes energy to sustain these dual fields. Penrose points out that tiny objects, such as dust > specks, atoms and electrons, produce space-time warps as well. That allows matter to have negative energy inside of the ergosphere as long as it moves counter the black hole's rotation fast enough (or, from outside perspective, resists being dragged along to a sufficient degree). Penrose's idea is a type of objective collapse theory. The propellant, being slowed, falls (thin gray line) to the event horizon of the black hole (black disk). Penrose is an unincorporated community in Transylvania County, North Carolina, United States. 
Inside the ergosphere even light cannot keep up with the rotation of the black hole, as the trajectories of stationary (from the outside perspective) objects become space-like, rather than time-like (that normal matter would have), or light-like. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level. == Overview == Penrose's idea is inspired by quantum gravity, because it uses both the physical constants \hbar and G. Penrose is located on U.S. Route 64 east-northeast of Brevard. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. ","The Penrose process is a mechanism through which objects can emerge from the ergosphere with less energy than they entered with, taking energy from the rotational energy of the black hole and speeding up its rotation.","The Penrose process is a mechanism through which objects can emerge from the ergosphere with the same energy as they entered with, taking energy from the rotational energy of the black hole and maintaining its rotation.","The Penrose process is a mechanism through which objects can emerge from the ergosphere with more energy than they entered with, taking extra energy from the rotational energy of the black hole and slowing down its rotation.","The Penrose process is a mechanism through which objects can emerge from the event horizon with less energy than they entered with, taking energy from the rotational energy of the black hole and speeding up its rotation.","The Penrose process is a mechanism through which objects can emerge from the event horizon with more energy than they entered with, taking extra energy from the rotational energy of the black hole and slowing down its rotation.",C,kaggle200,"In an adjunct process, a black hole can be spun up (its rotational speed increased) by sending in particles that do not split up, but instead give their entire angular momentum to the black hole. However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere --- a region of spacetime around the black hole dragged by its rotation ""faster than the speed of light"", meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The energy is taken from the rotation of the black hole, so there is a limit on how much energy one can extract by Penrose process and similar strategies (for an uncharged black hole no more than 29% of its original mass; larger efficiencies are possible for charged rotating black holes).
Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole. Thereby the rotation of the black hole slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.","Ergosphere Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly ""drag"" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator.Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole. Thereby the rotation of the black hole slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The energy is taken from the rotation of the black hole, so there is a limit on how much energy one can extract by Penrose process and similar strategies (for an uncharged black hole no more than 29% of its original mass; larger efficiencies are possible for charged rotating black holes).","However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere --- a region of spacetime around the black hole dragged by its rotation ""faster than the speed of light"", meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. In an adjunct process","However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere --- a region of spacetime around the black hole dragged by its rotation ""faster than the speed of light"", meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole.
The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. In an adjunct process[SEP]What is the Penrose process?","['E', 'C', 'D']",0.5
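Editor's note (an illustration appended after the row, not part of the dataset itself): the two percentages quoted in this row follow from standard results for an extremal (maximally rotating) uncharged Kerr black hole. The irreducible mass is M/sqrt(2), which bounds the total extractable rotational energy at 1 - 1/sqrt(2) of M, and the best single-particle split in the original Penrose process gains (sqrt(2) - 1)/2 of the particle's mass. These closed forms are textbook relations, not taken from the passage; the snippet simply evaluates them.

```python
from math import sqrt

# Total rotational energy extractable from an extremal uncharged Kerr black
# hole: M - M_irr, with irreducible mass M_irr = M / sqrt(2).
max_total_fraction = 1.0 - 1.0 / sqrt(2.0)
print(f"{max_total_fraction:.1%}")    # 29.3%, the '29%' bound quoted in the row

# Best-case gain for a single particle split in the original Penrose process.
max_single_split = (sqrt(2.0) - 1.0) / 2.0
print(f"{max_single_split:.1%}")      # 20.7%, matching the figure in the row
```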
What was the aim of the Gravity Probe B (GP-B) mission?,"Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. In a public press and media event at NASA Headquarters, GP-B Principal Investigator, Francis Everitt presented the final results of Gravity Probe B. ;19 November 2015 : Publication of GP-B Special Volume (Volume #32, Issue #22) in the peer-reviewed journal, Classical and Quantum Gravity. Final science results were reported in 2011. ==Experimental setup== thumb The Gravity Probe B experiment comprised four London moment gyroscopes and a reference telescope sighted on IM Pegasi, a binary star in the constellation Pegasus. Gravity Probe may refer to: * Gravity Probe A * Gravity Probe B de:Gravity Probe Gravity Probe B was expected to measure this effect to an accuracy of one part in 10,000, the most stringent check on general relativistic predictions to date. The Gravity Probe B mission timeline describes the events during the flight of Gravity Probe B, the science phase of its experimental campaign, and the analysis of the recorded data. ==Mission progress== * April 20, 2004 thumb|Launch of Gravity Probe B ** Launch of GP-B from Vandenberg AFB and successful insertion into polar orbit. Mission scientists viewed it as the second relativity experiment in space, following the successful launch of Gravity Probe A (GP-A) in 1976. Gravity Probe B marks the first time that Stanford University has been in control of the development and operations of a space satellite funded by NASA. The prospects for further experimental measurement of frame-dragging after GP-B were commented on in the journal Europhysics Letters. ==See also== * Frame-dragging * Gravity Probe A * Gravitomagnetism * Modified Newtonian dynamics * Tests of general relativity * Timeline of gravitational physics and relativity ==References== ==External links== * Gravity Probe B web site at NASA * Gravity Probe B Web site at Stanford * Graphic explanation of how Gravity Probe B works * NASA GP-B launch site * NASA article on the technologies used in Gravity Probe B * * General Relativistic Frame Dragging * Layman's article on the project progress * IOP Classical and Quantum Gravity, Volume #32, Issue #22, Special Focus Issue on Gravity Probe B * Gravity Probe B Collection, The University of Alabama in Huntsville Archives and Special Collections Category:Tests of general relativity Category:Physics experiments Category:Satellites orbiting Earth Category:Spacecraft launched in 2004 Category:Spacecraft launched by Delta II rockets The mission plans were to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. (Source: Gravity Probe B web site ) * Spring 2008 ** Mission update Increasing the Precision of the Results : ""In reality, GP-B experienced six major or significant anomalies during the 353-day science data collection period, and these anomalies caused the experimental data set to be divided into seven major segments, with a total of 307 days of ""good"" science data when all seven segments are combined. Francis Everitt gave a plenary talk at the meeting of the American Physical Society announcing initial results: ""The data from the GP-B gyroscopes clearly confirm Einstein's predicted geodetic effect to a precision of better than 1 percent. 
Francis Everitt gave a plenary talk at the meeting of the American Physical Society announcing initial results: ""The data from the GP-B gyroscopes clearly confirm Einstein's predicted geodetic effect to a precision of better than 1 percent. In an article published in the journal Physical Review Letters in 2011, the authors reported analysis of the data from all four gyroscopes results in a geodetic drift rate of and a frame-dragging drift rate of , in good agreement with the general relativity predictions of and , respectively. ==Overview== Gravity Probe B was a relativity gyroscope experiment funded by NASA. Because future interpretations of the data by scientists outside GPB may differ from the official results, it may take several more years for all of the data received by GPB to be completely understood. ==See also== * Frame- dragging * Geodetic effect * Gravity Probe B * Tests of general relativity * Timeline of gravitational physics and relativity ==References== Category:Spaceflight timelines The ensuing SAC report to NASA states: The Stanford-based analysis group and NASA announced on 4 May 2011 that the data from GP-B indeed confirms the two predictions of Albert Einstein's general theory of relativity. The spaceflight phase lasted until 2005; Its aim was to measure spacetime curvature near Earth, and thereby the stress–energy tensor (which is related to the distribution and the motion of matter in space) in and near Earth. This provided a test of general relativity, gravitomagnetism and related models. Several posters and alternative theorists (some skeptical of GPB and its methodology) have indicated that understanding these signals may be more interesting than the original goal of testing GR. A more precise explanation for the space curvature part of the geodetic precession is obtained by using a nearly flat cone to model the space curvature of the Earth's gravitational field. ",To prove that pressure contributes equally to spacetime curvature as does mass-energy.,"To measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism.",To measure the distribution of Fe and Al on the Moon's surface.,"To confirm the relatively large geodetic effect due to simple spacetime curvature, and is also known as de Sitter precession.",To measure the discrepancy between active and passive mass to about 10−12.,B,kaggle200,"Gravity Probe B marks the first time that Stanford University has been in control of the development and operations of a space satellite funded by NASA.
The Gravity Probe B mission timeline describes the events during the flight of Gravity Probe B, the science phase of its experimental campaign, and the analysis of the recorded data.
Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at of altitude, crossing directly over the poles.
The existence of gravitomagnetism was proven by Gravity Probe B , a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until . The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism.","Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 650 km (400 mi) of altitude, crossing directly over the poles.
This is a list of major events for the GP-B experiment.
Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B (GP-B), a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005. The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism.","The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Gravity Probe B marks the first time that Stanford University has been in control of the development and operations of a space satellite funded by NASA.
The Gravity Probe B mission timeline describes the events during the flight of Gravity Probe B, the science phase of its experimental campaign, and the analysis of the recorded data.
Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 650 km (400 mi) of altitude, crossing directly over the poles.
This is a list of major events for the GP-B experiment.
Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B (GP-B), a satellite-based mission which launched on 20 April 2004. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 650 km (400 mi) of altitude, crossing directly over the poles.
The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005.","The mission aim was to measure spacetime curvature near Earth, with particular emphasis on gravitomagnetism. Gravity Probe B marks the first time that Stanford University has been in control of the development and operations of a space satellite funded by NASA.
The Gravity Probe B mission timeline describes the events during the flight of Gravity Probe B, the science phase of its experimental campaign, and the analysis of the recorded data.
Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 650 km (400 mi) of altitude, crossing directly over the poles.
This is a list of major events for the GP-B experiment.
Gravitomagnetism The existence of gravitomagnetism was proven by Gravity Probe B (GP-B), a satellite-based mission which launched on 20 April 2004. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 650 km (400 mi) of altitude, crossing directly over the poles.
The existence of gravitomagnetism was proven by Gravity Probe B, a satellite-based mission which launched on 20 April 2004. The spaceflight phase lasted until 2005[SEP]What was the aim of the Gravity Probe B (GP-B) mission?","['B', 'C', 'D']",1.0
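As a rough sense of scale for the gyroscope measurement described in this row, the sketch below estimates the geodetic (de Sitter) precession rate for a circular polar orbit about 650 km up. This is a minimal Python illustration, not part of the dataset; the precession formula and the constants are standard values assumed here, and the frame-dragging effect the mission also targeted is far smaller still.

```python
# Rough estimate of the geodetic (de Sitter) precession rate for a gyroscope in a
# circular orbit ~650 km above Earth, the regime described for Gravity Probe B.
# Assumes the standard GR result Omega_geo = (3/2) * (GM)^(3/2) / (c^2 * r^(5/2)).
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # speed of light, m/s
R_EARTH = 6.371e6        # mean Earth radius, m
ALTITUDE = 650e3         # approximate GP-B orbital altitude, m
r = R_EARTH + ALTITUDE   # orbital radius, m

omega_geo = 1.5 * GM**1.5 / (c**2 * r**2.5)                      # rad/s
arcsec_per_year = omega_geo * 3.156e7 * (180.0 / math.pi) * 3600

print(f"geodetic precession ~ {arcsec_per_year:.2f} arcsec/year")
# Comes out near 6.6 arcsec/year, so resolving the much smaller frame-dragging
# drift required milliarcsecond-level readout of the gyroscope spin directions.
```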
What was Pierre de Fermat's solution to the problem of refraction?,"His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least time. Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistances. The ordinary law of refraction was at that time attributed to René Descartes (d.1650), who had tried to explain it by supposing that light was a force that propagated instantaneously, or that light was analogous to a tennis ball that traveled faster in the denser medium,Darrigol, 2012, pp.41–2. either premise being inconsistent with Fermat's. First proposed by the French mathematician Pierre de Fermat in 1662, as a means of explaining the ordinary law of refraction of light (Fig.1), Fermat's principle was initially controversial because it seemed to ascribe knowledge and intent to nature. Laplace continued: > According to Huygens, the velocity of the extraordinary ray, in the crystal, > is simply expressed by the radius of the spheroid; consequently his > hypothesis does not agree with the principle of the least action: but it is > remarkable that it agrees with the principle of Fermat, which is, that light > passes, from a given point without the crystal, to a given point within it, > in the least possible time; for it is easy to see that this principle > coincides with that of the least action, if we invert the expression of the > velocity.Translated by Young (1809), p.341; Young's italics. left|thumb|Thomas Young Laplace's report was the subject of a wide-ranging rebuttal by Thomas Young, who wrote in part: > The principle of Fermat, although it was assumed by that mathematician on > hypothetical, or even imaginary grounds, is in fact a fundamental law with > respect to undulatory motion, and is the basis of every determination in the > Huygenian theory... He would hardly have thought this necessary if he had known that the principle of least time followed directly from the same common-tangent construction by which he had deduced not only the law of ordinary refraction, but also the laws of rectilinear propagation and ordinary reflection (which were also known to follow from Fermat's principle), and a previously unknown law of extraordinary refraction -- the last by means of secondary wavefronts that were spheroidal rather than spherical, with the result that the rays were generally oblique to the wavefronts. But, for the time being, the corresponding extension of Fermat's principle went unnoticed. === Laplace, Young, Fresnel, and Lorentz === thumb|Pierre- Simon Laplace On 30 January 1809, Pierre-Simon Laplace, reporting on the work of his protégé Étienne-Louis Malus, claimed that the extraordinary refraction of calcite could be explained under the corpuscular theory of light with the aid of Maupertuis's principle of least action: that the integral of speed with respect to distance was a minimum. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.Sabra, 1981, pp.139,143–7; Darrigol, 2012, pp.48–9 (where, in footnote 21, ""Descartes to..."" If this notion was to explain refraction, it required the resistance to vary with direction in a manner that was hard to reconcile with reflection. (emphasis added), and was therefore bound to sow confusion rather than clarity. 
thumb|Augustin-Jean Fresnel No such confusion subsists in Augustin-Jean Fresnel's ""Second Memoir"" on double refraction (Fresnel, 1827), which addresses Fermat's principle in several places (without naming Fermat), proceeding from the special case in which rays are normal to wavefronts, to the general case in which rays are paths of least time or stationary time. Ibn al-Haytham, an 11th century polymaths later extended this principle to refraction, hence giving an early version of the Fermat's principle. === Fermat vs. the Cartesians === thumb|Pierre de Fermat In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.Sabra, 1981, pp.137–9; Darrigol, 2012, p.48. Huygens gave a geometric proof that a ray refracted according to this law takes the path of least time.Huygens, 1690, tr. Thompson, pp.42–5. Fermat refined and generalized this to ""light travels between two given points along the path of shortest time"" now known as the principle of least time. Fermat's principle, also known as the principle of least time, is the link between ray optics and wave optics. His only endorsement of Fermat's principle was limited in scope: having derived the law of ordinary refraction, for which the rays are normal to the wavefronts,Huygens, 1690, tr. Thompson, pp.34–9. obviously should be ""Fermat to..."").Ibn al-Haytham, writing in Cairo in the 2nd decade of the 11th century, also believed that light took the path of least resistance and that denser media offered more resistance, but he retained a more conventional notion of ""resistance"". And in optical experiments, a beam is routinely considered as a collection of rays or (if it is narrow) as an approximation to a ray (Fig.3).See (e.g.) Newton, 1730, p.55; Huygens, 1690, tr. Thompson, pp.40–41,56. === Analogies === According to the ""strong"" form of Fermat's principle, the problem of finding the path of a light ray from point A in a medium of faster propagation, to point B in a medium of slower propagation (Fig.1), is analogous to the problem faced by a lifeguard in deciding where to enter the water in order to reach a drowning swimmer as soon as possible, given that the lifeguard can run faster than (s)he can swim. Ziggelaar, 1980, ""The sine law of refraction derived from the principle of Fermat -- prior to Fermat? Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time. Fermat's principle states that the path taken by a ray between two given points is the path that can be traveled in the least time. ","Fermat supposed that light took the path of least resistance, and that different media offered the same resistance. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.","Fermat supposed that light took the path of least resistance, and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as directly proportional to speed, so that light took the path of least time. 
That premise yielded the ordinary law of refraction, provided that light traveled more quickly in the optically denser medium.","Fermat supposed that light took the path of least resistance, and that different media offered the same resistance. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as directly proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.","Fermat supposed that light took the path of least resistance, and that different media offered the same resistance. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more quickly in the optically denser medium.","Fermat supposed that light took the path of least resistance, and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.",E,kaggle200,"In nature, water cascading down a mountain will always follow the path of least resistance – the easiest route. In thinking, too, our minds tend to take the path of least resistance – those avenues most familiar to us. So doing, it is difficult to arrive at ideas new to us or to our competitors. SIT encourages an approach to the counter-intuitive path – the path of most resistance.
The path of least resistance is also used to describe certain human behaviors, although with much less specificity than in the strictly physical sense. In these cases, resistance is often used as a metaphor for personal effort or confrontation; a person taking the path of least resistance avoids these. In library science and technical writing, information is ideally arranged for users according to the principle of least effort, or the ""path of least resistance"". Recursive navigation systems are an example of this.
If a ray follows a straight line, it obviously takes the path of least ""length"". Hero of Alexandria, in his ""Catoptrics"" (1st century CE), showed that the ordinary law of reflection off a plane surface follows from the premise that the total ""length"" of the ray path is a minimum. In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of a newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.
Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least ""resistance"", and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least ""time"". That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.","Path of most resistance In nature, water cascading down a mountain will always follow the path of least resistance – the easiest route. In thinking, too, our minds tend to take the path of least resistance – those avenues most familiar to us. So doing, it is difficult to arrive at ideas new to us or to our competitors. SIT encourages an approach to the counter-intuitive path – the path of most resistance.
The path of least resistance is also used to describe certain human behaviors, although with much less specificity than in the strictly physical sense. In these cases, resistance is often used as a metaphor for personal effort or confrontation; a person taking the path of least resistance avoids these. In library science and technical writing, information is ideally arranged for users according to the principle of least effort, or the ""path of least resistance"". Recursive navigation systems are an example of this.
Fermat vs. the Cartesians In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of a newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction. Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistances. His eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least time. That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium. Fermat's solution was a landmark in that it unified the then-known laws of geometrical optics under a variational principle or action principle, setting the precedent for the principle of least action in classical mechanics and the corresponding principles in other fields (see History of variational principles in physics). It was the more notable because it used the method of adequality, which may be understood in retrospect as finding the point where the slope of an infinitesimally short chord is zero, without the intermediate step of finding a general expression for the slope (the derivative).
Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least ""resistance"", and that different media offered different resistancesthe Cartesians In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistancesHis eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least timeHis eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least ""time""That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.Hero of Alexandria, in his ""Catoptrics"" (1st century CE), showed that the ordinary law of reflection off a plane surface follows from the premise that the total ""length"" of the ray path is a minimumIt was the more notable because it used the method of adequality, which may be understood in retrospect as finding the point where the slope of an infinitesimally short chord is zero,","That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.Fermat's solution was a landmark in that it unified the then-known laws of geometrical optics under a variational principle or action principle, setting the precedent for the principle of least action in classical mechanics and the corresponding principles in other fields (see History of variational principles in physics)In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.
Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least ""resistance"", and that different media offered different resistancesthe Cartesians In 1657, Pierre de Fermat received from Marin Cureau de la Chambre a copy of newly published treatise, in which La Chambre noted Hero's principle and complained that it did not work for refraction.Fermat replied that refraction might be brought into the same framework by supposing that light took the path of least resistance, and that different media offered different resistancesHis eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least timeHis eventual solution, described in a letter to La Chambre dated 1 January 1662, construed ""resistance"" as inversely proportional to speed, so that light took the path of least ""time""That premise yielded the ordinary law of refraction, provided that light traveled more slowly in the optically denser medium.Hero of Alexandria, in his ""Catoptrics"" (1st century CE), showed that the ordinary law of reflection off a plane surface follows from the premise that the total ""length"" of the ray path is a minimumIt was the more notable because it used the method of adequality, which may be understood in retrospect as finding the point where the slope of an infinitesimally short chord is zero,[SEP]What was Pierre de Fermat's solution to the problem of refraction?","['E', 'C', 'D']",1.0
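The least-time account of refraction in this row can be checked numerically: minimizing the travel time of a ray crossing a flat interface between a faster and a slower medium reproduces the ordinary law of refraction, sin θ1 / sin θ2 = v1 / v2. The sketch below is a minimal illustration; the speeds, endpoints, and grid search are arbitrary choices, not taken from the text.

```python
# Numerical check of Fermat's principle of least time at a flat interface:
# a ray goes from A (in the fast medium) to B (in the slow medium), crossing
# the interface y = 0 at some point x; the least-time crossing obeys Snell's law.
import math

v1, v2 = 1.0, 0.75           # propagation speeds above/below the interface (arbitrary units)
A = (0.0, 1.0)               # start point, 1 unit above the interface
B = (2.0, -1.0)              # end point, 1 unit below the interface

def travel_time(x):
    """Total travel time if the ray crosses the interface at (x, 0)."""
    t1 = math.hypot(x - A[0], A[1]) / v1
    t2 = math.hypot(B[0] - x, B[1]) / v2
    return t1 + t2

# crude grid search for the least-time crossing point between x = 0 and x = 2
xs = [i * 1e-5 for i in range(200001)]
x_best = min(xs, key=travel_time)

sin1 = (x_best - A[0]) / math.hypot(x_best - A[0], A[1])   # sine of the incidence angle
sin2 = (B[0] - x_best) / math.hypot(B[0] - x_best, B[1])   # sine of the refraction angle
print(f"sin(theta1)/sin(theta2) = {sin1 / sin2:.4f},  v1/v2 = {v1 / v2:.4f}")
# The two ratios agree to within the grid resolution: least time implies the ordinary law of refraction.
```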
What is the reason behind the adoption of a logarithmic scale of 5√100 ≈ 2.512 between magnitudes in astronomy?,"The ancient apparent magnitudes for the brightness of stars uses the base \sqrt[5]{100} \approx 2.512 and is reversed. thumb|Log-log plot of aperture diameter vs angular resolution at the diffraction limit for various light wavelengths compared with various astronomical instruments. Orders of magnitude Category:Elementary mathematics Category:Logarithmic scales of measurement For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). In the case of log log x, this mean of two numbers (e.g. 2 and 16 giving 4) does not depend on the base of the logarithm, just like in the case of log x (geometric mean, 2 and 8 giving 4), but unlike in the case of log log log x (4 and giving 16 if the base is 2, but not otherwise). ==See also== * Big O notation * Decibel * Mathematical operators and symbols in Unicode * Names of large numbers * Names of small numbers * Number sense * Orders of magnitude (acceleration) * Orders of magnitude (area) * Orders of magnitude (current) * Orders of magnitude (energy) * Orders of magnitude (force) * Orders of magnitude (frequency) * Orders of magnitude (illuminance) * Orders of magnitude (length) * Orders of magnitude (mass) * Orders of magnitude (numbers) * Orders of magnitude (power) * Orders of magnitude (pressure) * Orders of magnitude (radiation) * Orders of magnitude (speed) * Orders of magnitude (temperature) * Orders of magnitude (time) * Orders of magnitude (voltage) * Orders of magnitude (volume) * Powers of Ten * Scientific notation * Unicode symbols for CJK Compatibility includes SI Unit symbols * Valuation (algebra), an algebraic generalization of ""order of magnitude"" * Scale (analytical tool) == References == ==Further reading== * Asimov, Isaac, The Measure of the Universe (1983). ==External links== * The Scale of the Universe 2 Interactive tool from Planck length 10−35 meters to universe size 1027 * Cosmos - an Illustrated Dimensional Journey from microcosmos to macrocosmos - from Digital Nature Agency * Powers of 10, a graphic animated illustration that starts with a view of the Milky Way at 1023 meters and ends with subatomic particles at 10−16 meters. A difference of 5 magnitudes between the absolute magnitudes of two objects corresponds to a ratio of 100 in their luminosities, and a difference of n magnitudes in absolute magnitude corresponds to a luminosity ratio of 100n/5. Although bolometric magnitudes had been used by astronomers for many decades, there had been systematic differences in the absolute magnitude-luminosity scales presented in various astronomical references, and no international standardization. A galaxy's magnitude is defined by measuring all the light radiated over the entire object, treating that integrated brightness as the brightness of a single point-like or star-like source, and computing the magnitude of that point-like source as it would appear if observed at the standard 10 parsecs distance. In 1815, Peter Mark Roget invented the log log slide rule, which included a scale displaying the logarithm of the logarithm. 
Following Resolution B2, the relation between a star's absolute bolometric magnitude and its luminosity is no longer directly tied to the Sun's (variable) luminosity: M_\mathrm{bol} = -2.5 \log_{10} \frac{L_\star}{L_0} \approx -2.5 \log_{10} L_\star + 71.197425 where * is the star's luminosity (bolometric luminosity) in watts * is the zero point luminosity * is the bolometric magnitude of the star The new IAU absolute magnitude scale permanently disconnects the scale from the variable Sun. Order-of-magnitude differences are called decades when measured on a logarithmic scale. ==Non-decimal orders of magnitude== Other orders of magnitude may be calculated using bases other than 10. Differences in order of magnitude can be measured on a base-10 logarithmic scale in “decades” (i.e., factors of ten). For objects at very large distances (outside the Milky Way) the luminosity distance (distance defined using luminosity measurements) must be used instead of , because the Euclidean approximation is invalid for distant objects. * Jansky radio astronomer's preferred unit – linear in power/unit area * List of most luminous stars * Photographic magnitude * Surface brightness – the magnitude for extended objects * Zero point (photometry) – the typical calibration point for star flux == References == /5}, where H_{\text{Sun}}=-26.76, the absolute magnitude of the Sun, and 1\text{ AU}=1.4959787\times10^{8}\text{ km}. }} == External links == * Reference zero-magnitude fluxes * International Astronomical Union * Absolute Magnitude of a Star calculator * The Magnitude system * About stellar magnitudes * Obtain the magnitude of any star – SIMBAD * Converting magnitude of minor planets to diameter * Another table for converting asteroid magnitude to estimated diameter Category:Observational astronomy It is defined based on the luminosity of the stars. The modernized version has however turned into a logarithmic scale with non-integer values. ===Extremely large numbers=== For extremely large numbers, a generalized order of magnitude can be based on their double logarithm or super-logarithm. Absolute magnitudes of stars generally range from approximately −10 to +20. Combined with incorrect assumed absolute bolometric magnitudes for the Sun, this could lead to systematic errors in estimated stellar luminosities (and other stellar properties, such as radii or ages, which rely on stellar luminosity to be calculated). The absolute magnitude can also be written in terms of the apparent magnitude and stellar parallax : M = m + 5 \left(\log_{10}p+1\right), or using apparent magnitude and distance modulus : M = m - \mu. ==== Examples ==== Rigel has a visual magnitude of 0.12 and distance of about 860 light-years: M_\mathrm{V} = 0.12 - 5 \left(\log_{10} \frac{860}{3.2616} - 1 \right) = -7.0. For example, the number has a logarithm (in base 10) of 6.602; its order of magnitude is 6. ",The logarithmic scale was adopted to ensure that five magnitude steps corresponded precisely to a factor of 100 in brightness.,The logarithmic scale was adopted to measure the size of stars.,The logarithmic scale was adopted to measure the intensity of light coming from a star.,The logarithmic scale was adopted to ensure that the apparent sizes of stars were not spurious.,The logarithmic scale was adopted to measure the distance between stars.,A,kaggle200,"Although the spectrogram is profoundly useful, it still has one drawback. It displays frequencies on a uniform scale. 
However, musical scales are based on a logarithmic scale for frequencies. Therefore, frequency should be described on a logarithmic scale related to human hearing.
The size of commas is commonly expressed and compared in terms of cents – fractions of an octave on a logarithmic scale.
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot.","When comparing magnitudes, a logarithmic scale is often used. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. In the natural sciences, a logarithmic magnitude is typically referred to as a level.
Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of 5√100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of 5√100 or roughly 2.512 times. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about 2.5² times brighter than a magnitude 3 star, about 2.5³ times brighter than a magnitude 4 star, and so on.
Scaled rounding This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.","In the natural sciences, a logarithmic magnitude is typically referred to as a level.
Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of 5√100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. When comparing magnitudes, a logarithmic scale is often used. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about 2.5² times brighter than a magnitude 3 star, about 2.5³ times brighter than a magnitude 4 star, and so on.
Scaled rounding This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. However, musical scales are based on a logarithmic scale for frequencies. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. Every interval of one magnitude equates to a variation in brightness of 5√100 or roughly 2.512 times. Therefore, we should describe the frequency in logarithmic scale related to human hearing.
The size of commas is commonly expressed and compared in terms of cents – fractions of an octave on a logarithmic scale.
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. It displays frequencies on a uniform scale. Although the spectrogram is profoundly useful, it still has one drawback
Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of 5√100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. When comparing magnitudes, a logarithmic scale is often used. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about 2.5² times brighter than a magnitude 3 star, about 2.5³ times brighter than a magnitude 4 star, and so on.
Scaled rounding This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. However, musical scales are based on a logarithmic scale for frequencies. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale.
If only the ordinate or abscissa is scaled logarithmically, the plot is referred to as a semi-logarithmic plot. Rounding on a logarithmic scale is accomplished by taking the log of the amount and doing normal rounding to the nearest value on the log scale. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. Every interval of one magnitude equates to a variation in brightness of 5√100 or roughly 2.512 times. Therefore, we should describe the frequency in logarithmic scale related to human hearing.
The size of commas is commonly expressed and compared in terms of cents – fractions of an octave on a logarithmic scale.
This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified power. It displays frequencies on a uniform scale. Although the spectrogram is profoundly useful, it still has one drawback[SEP]What is the reason behind the adoption of a logarithmic scale of 5√100 ≈ 2.512 between magnitudes in astronomy?","['A', 'C', 'E']",1.0
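The Pogson scale quoted in this row fixes the brightness ratio for a magnitude difference Δm at 100^(Δm/5), and the bolometric formula M_bol ≈ −2.5 log10 L★ + 71.197425 implies a zero-point luminosity of 10^(71.197425/2.5) W. The sketch below works both out, together with the "rounding on a logarithmic scale" rule also described in this row; the nominal solar luminosity used in the last example is an assumed IAU value, not taken from the text.

```python
# Magnitude arithmetic from the Pogson scale and the bolometric zero point,
# plus rounding to the nearest power of ten on a log scale, as described in the row.
import math

def brightness_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m."""
    return 100 ** (delta_m / 5)          # one magnitude step is about 2.512

def absolute_bolometric_magnitude(luminosity_watts):
    """M_bol = -2.5 log10(L / L0), with L0 implied by the +71.197425 constant."""
    L0 = 10 ** (71.197425 / 2.5)         # ~3.0128e28 W
    return -2.5 * math.log10(luminosity_watts / L0)

def round_on_log_scale(x):
    """Round x to the nearest power of ten on a log scale (rounds up past sqrt(10))."""
    return 10 ** round(math.log10(x))

print(brightness_ratio(1))                    # ~2.512
print(brightness_ratio(5))                    # exactly 100.0
L_SUN = 3.828e26                              # assumed nominal solar luminosity, W
print(absolute_bolometric_magnitude(L_SUN))   # ~4.74, the Sun's absolute bolometric magnitude
print(round_on_log_scale(4e5))                # 1000000: multiplier 4 > sqrt(10), so it rounds up
```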
What is the spin quantum number?,"In physics, the spin quantum number is a quantum number (designated ) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number , the azimuthal quantum number , the magnetic quantum number , and the spin magnetic quantum number ), which completely describe the quantum state of an electron in an atom. At a more advanced level where quantum mechanical operators or coupled spins are introduced, is referred to as the spin quantum number, and is described as the spin magnetic quantum number or as the -component of spin . In atomic physics, a magnetic quantum number is a quantum number used to distinguish quantum states of an electron or other particle according to its angular momentum along a given axis in space. Some introductory chemistry textbooks describe as the spin quantum number, and is not mentioned since its value is a fixed property of the electron, sometimes using the variable in place of . The spin magnetic quantum number specifies the z-axis component of the spin angular momentum for a particle having spin quantum number . Other magnetic quantum numbers are similarly defined, such as for the z-axis component the total electronic angular momentum , and for the nuclear spin . The direction of spin is described by spin quantum number. * The particles having integral value (0, 1, 2...) of spin are called bosons. == Magnetic nature of atoms and molecules == The spin quantum number helps to explain the magnetic properties of atoms and molecules. Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written . The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number , the magnetic quantum number , and the spin quantum number ). Magnetic quantum numbers are capitalized to indicate totals for a system of particles, such as or for the total z-axis orbital angular momentum of all the electrons in an atom. ==Derivation== thumb|These orbitals have magnetic quantum numbers m_l=-\ell, \ldots,\ell from left to right in ascending order. Typical quantum numbers related to spacetime symmetries are spin (related to rotational symmetry), the parity, C-parity and T-parity (related to the Poincaré symmetry of spacetime). * The magnitude spin quantum number of an electron cannot be changed. In quantum mechanics, the azimuthal quantum number is a quantum number for an atomic orbital that determines its orbital angular momentum and describes the shape of the orbital. The magnetic quantum number determines the energy shift of an atomic orbital due to an external magnetic field (the Zeeman effect) -- hence the name magnetic quantum number. Nuclear-spin quantum numbers are conventionally written for spin, and or for the -axis component. Quantum numbers often describe specifically the energy levels of electrons in atoms, but other possibilities include angular momentum, spin, etc. As a result of the different basis that may be arbitrarily chosen to form a complete set of commuting operators, different sets of quantum numbers may be used for the description of the same system in different situations. 
==Electron in an atom== Four quantum numbers can describe an electron in an atom completely: *Principal quantum number () *Azimuthal quantum number () *Magnetic quantum number () *Spin quantum number () The spin–orbital interaction, however, relates these numbers. ",The spin quantum number is a measure of the distance between an elementary particle and the nucleus of an atom.,The spin quantum number is a measure of the size of an elementary particle.,The spin quantum number is a measure of the charge of an elementary particle.,The spin quantum number is a measure of the speed of an elementary particle's rotation around some axis.,"The spin quantum number is a dimensionless quantity obtained by dividing the spin angular momentum by the reduced Planck constant ħ, which has the same dimensions as angular momentum.",E,kaggle200,"In atomic physics, the spin quantum number is a quantum number (designated ) which describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. The phrase was originally used to describe the fourth of a set of quantum numbers (the principal quantum number , the azimuthal quantum number , the magnetic quantum number , and the spin quantum number ), which completely describe the quantum state of an electron in an atom. The name comes from a physical spinning of the electron about an axis, as proposed by Uhlenbeck and Goudsmit. The value of is the component of spin angular momentum parallel to a given direction (the –axis), which can be either +1/2 or –1/2 (in units of the reduced Planck constant).
In general, the values of ms range from −s to +s, where s is the spin quantum number, associated with the particle's intrinsic spin angular momentum:
sz = msℏ, where ms is the secondary spin quantum number, ranging from −s to +s in steps of one. This generates 2s + 1 different values of ms.
At an elementary level, ms is described as the spin quantum number, and s is not mentioned since its value 1/2 is a fixed property of the electron. At a more advanced level where quantum mechanical operators are introduced, s is referred to as the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz.
In physics, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particle. It has the same value for all particles of the same type, such as s = 1/2 for all electrons. It is an integer for all bosons, such as photons, and a half-odd-integer for all fermions, such as electrons and protons. The component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written ms. The value of ms is the component of spin angular momentum, in units of the reduced Planck constant ħ, parallel to a given direction (conventionally labelled the z–axis). It can take values ranging from +s to −s in integer increments. For an electron, ms can be either ++1/2 or −+1/2 .
The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number n, the azimuthal quantum number ℓ, the magnetic quantum number m, and the spin magnetic quantum number ms), which completely describe the quantum state of an electron in an atom. Some introductory chemistry textbooks describe ms as the spin quantum number, and s is not mentioned since its value 1/2 is a fixed property of the electron, sometimes using the variable s in place of ms. Some authors discourage this usage as it causes confusion. At a more advanced level where quantum mechanical operators or coupled spins are introduced, s is referred to as the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz.Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electron. Capitalized symbols are used: S for the total electronic spin, and mS or MS for the z-axis component. A pair of electrons in a spin singlet state has S = 0, and a pair in the triplet state has S = 1, with mS = −1, 0, or +1. Nuclear-spin quantum numbers are conventionally written I for spin, and mI or MI for the z-axis component.","- In atomic physics, the spin quantum number is a quantum number (designated ) which describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particleFor an electron, ms can be either ++1/2 or −+1/2 .
The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number n, the azimuthal quantum number ℓ, the magnetic quantum number m, and the spin magnetic quantum number ms), which completely describe the quantum state of an electron in an atomThis generates 2 s + 1 different values of ms.
In physics, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particleAt a more advanced level where quantum mechanical operators are introduced, is referred to as the spin quantum number, and is described as the spin magnetic quantum number or as the z-component of spin .The value of is the component of spin angular momentum parallel to a given direction (the –axis), which can be either +1/2 or –1/2 (in units of the reduced Planck constant).
In general, the values of range from to , where is the spin quantum number, associated with the particle's intrinsic spin angular momentum:
where is the secondary spin quantum number, ranging from − to + in steps of oneThis generates different values of .
At an elementary level, is described as the spin quantum number, and is not mentioned since its value 1/2 is a fixed property of the electronAt a more advanced level where quantum mechanical operators or coupled spins are introduced, s is referred to as the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz.Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electronThe component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written msNuclear-spin quantum numbers are conventionally written I for spi","- In atomic physics, the spin quantum number is a quantum number (designated ) which describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particleFor an electron, ms can be either ++1/2 or −+1/2 .
The phrase spin quantum number was originally used to describe the fourth of a set of quantum numbers (the principal quantum number n, the azimuthal quantum number ℓ, the magnetic quantum number m, and the spin magnetic quantum number ms), which completely describe the quantum state of an electron in an atomThis generates 2 s + 1 different values of ms.
In physics, the spin quantum number is a quantum number (designated s) that describes the intrinsic angular momentum (or spin angular momentum, or simply spin) of an electron or other particleAt a more advanced level where quantum mechanical operators are introduced, is referred to as the spin quantum number, and is described as the spin magnetic quantum number or as the z-component of spin .The value of is the component of spin angular momentum parallel to a given direction (the –axis), which can be either +1/2 or –1/2 (in units of the reduced Planck constant).
In general, the values of range from to , where is the spin quantum number, associated with the particle's intrinsic spin angular momentum:
where is the secondary spin quantum number, ranging from − to + in steps of oneThis generates different values of .
At an elementary level, is described as the spin quantum number, and is not mentioned since its value 1/2 is a fixed property of the electronAt a more advanced level where quantum mechanical operators or coupled spins are introduced, s is referred to as the spin quantum number, and ms is described as the spin magnetic quantum number or as the z-component of spin sz.Spin quantum numbers apply also to systems of coupled spins, such as atoms that may contain more than one electronThe component of the spin along a specified axis is given by the spin magnetic quantum number, conventionally written msNuclear-spin quantum numbers are conventionally written I for spi[SEP]What is the spin quantum number?","['E', 'D', 'C']",1.0
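The relations quoted in this row (a fixed spin magnitude set by s, and z-projections msℏ with ms running from −s to +s in unit steps, 2s + 1 values in all) can be tabulated directly. A minimal sketch follows; the explicit magnitude formula ℏ√(s(s+1)) is the standard quantum-mechanical result assumed here.

```python
# Tabulate the spin magnitude and allowed z-projections for a given spin quantum number s,
# using |S| = hbar * sqrt(s(s+1)) and S_z = m_s * hbar with m_s = -s, -s+1, ..., +s.
import math
from fractions import Fraction

def spin_states(s):
    """Return (|S| in units of hbar, list of m_s values) for spin quantum number s."""
    s = Fraction(s)
    magnitude = math.sqrt(float(s * (s + 1)))              # in units of hbar
    m_values = [-s + k for k in range(int(2 * s) + 1)]     # 2s + 1 projections
    return magnitude, m_values

for s in (Fraction(1, 2), Fraction(1), Fraction(3, 2)):
    mag, ms = spin_states(s)
    print(f"s = {s}: |S| = {mag:.3f} hbar, m_s = {[str(m) for m in ms]}")
# e.g. an electron (s = 1/2) has |S| ~ 0.866 hbar and m_s = -1/2 or +1/2.
```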
What is the synapstor or synapse transistor?,"A synaptic transistor is an electrical device that can learn in ways similar to a neural synapse. SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology, an attempt to build a new kind of cognitive computer with form, function, and architecture similar to the mammalian brain. SyNAPSE is a backronym standing for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The initial phase of the SyNAPSE program developed nanometer scale electronic synaptic components capable of adapting the connection strength between two neurons in a manner analogous to that seen in biological systems (Hebbian learning), and simulated the utility of these synaptic components in core microcircuits that support the overall system architecture. The input and output of the synaptic transistor are continuous analog values, rather than digital on-off signals. A network of such devices can learn particular responses to ""sensory inputs"", with those responses being learned through experience rather than explicitly programmed. ==References== Category:Transistor types Category:Artificial neural networks While the physical structure of the device has the potential to learn from history, it contains no way to bias the transistor to control the memory effect. In a neuron, synaptic vesicles (or neurotransmitter vesicles) store various neurotransmitters that are released at the synapse. Transmitter loading Once at the synapse, synaptic vesicles are loaded with a neurotransmitter. Synapse is a peer-reviewed scientific journal of neuroscience published in New York City by Wiley-Liss to address basic science topics on synaptic function and structure. The device mimics the behavior of the property of neurons called spike-timing-dependent plasticity, or STDP. ==Structure== Its structure is similar to that of a field effect transistor, where an ionic liquid takes the place of the gate insulating layer between the gate electrode and the conducting channel. In support of these hardware developments, the program seeks to develop increasingly capable architecture and design tools, very large-scale computer simulations of the neuromorphic electronic systems to inform the designers and validate the hardware prior to fabrication, and virtual environments for training and testing the simulated and hardware neuromorphic systems. ==Published product highlights== * clockless operation (event-driven), consumes 70 mW during real- time operation, power density of 20 mW/cm²New IBM SyNAPSE Chip Could Open Era of Vast Neural Networks IBM, August 7, 2014 * manufactured in Samsung’s 28 nm process technology, 5.4 billion transistors * one million neurons and 256 million synapses networked into 4096 neurosynaptic cores by a 2D array, all programmable * each core module integrates memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion ==Participants== The following people and institutions are participating in the DARPA SyNAPSE program: IBM team, led by Dharmendra Modha * Stanford University: Brian A. Wandell, H.-S. 
Philip Wong * Cornell University: Rajit Manohar * Columbia University Medical Center: Stefano Fusi * University of Wisconsin–Madison: Giulio Tononi * University of California, Merced: Christopher Kello * iniLabs GmbH: Tobi Delbruck * IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Dharmendra Modha, Stuart Parkin, Bipin Rajendran, Raghavendra Singh HRL Team led by Narayan Srinivasa * HRL Laboratories: Narayan Srinivasa, Jose Cruz-Albrecht, Dana Wheeler, Tahir Hussain, Sri Satyanarayana, Tim Derosier, Youngkwan Cho, Corey Thibeault, Michael O' Brien, Michael Yung, Karl Dockendorf, Vincent De Sapio, Qin Jiang, Suhas Chelian * Boston University: Massimiliano Versace, Stephen Grossberg, Gail Carpenter, Yongqiang Cao, Praveen Pilly * Neurosciences Institute: Gerald Edelman, Einar Gall, Jason Fleischer * University of Michigan: Wei Lu * Georgia Institute of Technology: Jennifer Hasler * University of California, Irvine: Jeff Krichmar * George Mason University: Giorgio Ascoli, Alexei Samsonovich * Portland State University: Christof Teuscher * Stanford University: Mark Schnitzer * Set Corporation: Chris Long ==See also== *TrueNorth – IBM chip (introduced mid 2014) boasts of 1 million neurons and 256 million synapses (computing sense); 5.4 billion transistors and 4,096 neurosynaptic cores (hardware). The name alludes to synapses, the junctions between biological neurons. The analog of strengthening a synapse is to increase the SNO's conductivity, which essentially increases gain. Similarly, weakening a synapse is analogous to decreasing the SNO's conductivity, lowering the gain. That channel is composed of samarium nickelate (, or SNO) rather than the field effect transistor's doped silicon. ==Function== A synaptic transistor has a traditional immediate response whose amount of current that passes between the source and drain contacts varies with voltage applied to the gate electrode. * Computational RAM is another approach bypassing the von Neumann bottleneck ==References== ==External links== * Systems of Neuromorphic Adaptive Plastic Scalable Electronics * Neuromorphonics Lab, Boston University * Center for Neural and Emergent Systems Homepage * HRL Labs Homepage Category:Neurotechnology Category:DARPA projects In this case, the synaptic vesicle ""kisses"" the cellular membrane, opening a small pore for its neurotransmitter payload to be released through, then closes the pore and is recycled back into the cell. The missing link was the demonstration that the neurotransmitter acetylcholine is actually contained in synaptic vesicles. The Synapse web portal is an online registry of research projects that allows data scientists to discover and share data, models, and analysis methods. 
==References== Category:Open science Category:Collaborative projects Category:Computing websites Category:Cross-platform software Category:Project hosting websites ",A device used to demonstrate a neuro-inspired circuit that shows short-term potentiation for learning and inactivity-based forgetting.,A device used to demonstrate a neuro-inspired circuit that shows long-term potentiation for learning and activity-based forgetting.,A device used to demonstrate a neuro-inspired circuit that shows short-term depression for learning and inactivity-based forgetting.,A device used to demonstrate a neuro-inspired circuit that shows short-term potentiation for learning and activity-based forgetting.,A device used to demonstrate a neuro-inspired circuit that shows long-term potentiation for learning and inactivity-based forgetting.,E,kaggle200,"The Book of Learning and Forgetting () is a 1998 book in which author Frank Smith investigates the history of learning theories and the events that shaped our current educational structure.
NOMFET is a nanoparticle organic memory field-effect transistor. The transistor is designed to mimic the feature of the human synapse known as plasticity, or the variation of the speed and strength of the signal going from neuron to neuron. The device uses gold nano-particles of about 5—20 nm set with pentacene to emulate the change in voltages and speed within the signal. This device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapse. This device (memristor-like) mimics short-term plasticity (STP) and temporal correlation plasticity (STDP, spike-timing dependent plasticity), two ""functions"" at the basis of learning processes. A compact model was developed, and these organic synapstors were used to demonstrate an associative memory, which can be trained to present a pavlovian response. A recent report showed that these organic synapse-transistors (synapstor) are working at 1 volt and with a plasticity typical response time in the range 100-200 ms. The device also works in contact with an electrolyte (EGOS : electrolyte gated organic synapstor) and can be interfaced with biologic neurons.
A balloon rocket is a rubber balloon filled with air or other gases. Besides being simple toys, balloon rockets are a widely used as a teaching device to demonstrate basic physics.
In 2010, Alibart, Gamrat, Vuillaume et al. introduced a new hybrid organic/nanoparticle device (the NOMFET : Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).","NOMFET is a nanoparticle organic memory field-effect transistor. The transistor is designed to mimic the feature of the human synapse known as plasticity, or the variation of the speed and strength of the signal going from neuron to neuron. The device uses gold nano-particles of about 5—20 nm set with pentacene to emulate the change in voltages and speed within the signal. This device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO2/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapse. This device (memristor-like) mimics short-term plasticity (STP) and temporal correlation plasticity (STDP, spike-timing dependent plasticity), two ""functions"" at the basis of learning processes. A compact model was developed, and these organic synapstors were used to demonstrate an associative memory, which can be trained to present a pavlovian response. A recent report showed that these organic synapse-transistors (synapstor) are working at 1 volt and with a plasticity typical response time in the range 100-200 ms. The device also works in contact with an electrolyte (EGOS : electrolyte gated organic synapstor) and can be interfaced with biologic neurons. The recent creation of this novel transistor gives prospects to better recreation of certain types of human cognitive processes, such as recognition and image processing. When the NOMFET is used in a neuromorphic circuit it is able to replicate the functionality of plasticity that previously required groups of several transistors to emulate and thus continue to decrease the size of the processor that would be attempting to utilize the computational advantages of a pseudo-synaptic operation. (See Moore's Law)
A balloon rocket is a rubber balloon filled with air or other gases. Besides being simple toys, balloon rockets are a widely used as a teaching device to demonstrate basic physics.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.In 2010, Alibart, Gamrat, Vuillaume et al. introduced a new hybrid organic/nanoparticle device (the NOMFET : Nanoparticle Organic Memory Field Effect Transistor), which behaves as a memristor and which exhibits the main behavior of a biological spiking synapse. This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).In 2012, Crupi, Pradhan and Tozer described a proof of concept design to create neural synaptic memory circuits using organic ion-based memristors. The synapse circuit demonstrated long-term potentiation for learning as well as inactivity based forgetting. Using a grid of circuits, a pattern of light was stored and later recalled. This mimics the behavior of the V1 neurons in the primary visual cortex that act as spatiotemporal filters that process visual signals such as edges and moving lines.","This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).The transistor is designed to mimic the feature of the human synapse known as plasticity, or the variation of the speed and strength of the signal going from neuron to neuronThis device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).In 2012, Crupi, Pradhan and Tozer described a proof of concept design to create neural synaptic memory circuits using organic ion-based memristorsThe device also works in contact with an electrolyte (EGOS : electrolyte gated organic synapstor) and can be interfaced with biologic neuronsWhen the NOMFET is used in a neuromorphic circuit it is able to replicate the functionality of plasticity that previously required groups of several transistors to emulate and thus continue to decrease the size of the processor that would be attempting to utilize the computational advantages of a pseudo-synaptic operationThis device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO2/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapseA recent report showed that these organic synapse-transistors (synapstor) are working at 1 volt and with a plasticity typical response time in the range 100-200 msThis device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapseThe synapse circuit demonstrated long-term potentiation for learning as well as inactivity based forgettingThe recent creation of this novel transistor gives prospects to better recreation of certain types of human cognitive processes, such as recognition and image processingA compact model was developed, and these organic synapstors were used to demonstrate an associative memory, which can be trained to ","This device, also called a synapstor (synapse transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).The transistor is designed to mimic the feature of the human synapse known as plasticity, or the variation of the speed and strength of the signal going from neuron to neuronThis device, also called a synapstor (synapse 
transistor), was used to demonstrate a neuro-inspired circuit (associative memory showing a pavlovian learning).In 2012, Crupi, Pradhan and Tozer described a proof of concept design to create neural synaptic memory circuits using organic ion-based memristorsThe device also works in contact with an electrolyte (EGOS : electrolyte gated organic synapstor) and can be interfaced with biologic neuronsWhen the NOMFET is used in a neuromorphic circuit it is able to replicate the functionality of plasticity that previously required groups of several transistors to emulate and thus continue to decrease the size of the processor that would be attempting to utilize the computational advantages of a pseudo-synaptic operationThis device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO2/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapseA recent report showed that these organic synapse-transistors (synapstor) are working at 1 volt and with a plasticity typical response time in the range 100-200 msThis device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapseThe synapse circuit demonstrated long-term potentiation for learning as well as inactivity based forgettingThe recent creation of this novel transistor gives prospects to better recreation of certain types of human cognitive processes, such as recognition and image processingA compact model was developed, and these organic synapstors were used to demonstrate an associative memory, which can be trained to [SEP]What is the synapstor or synapse transistor?","['E', 'A', 'D']",1.0
What is spontaneous symmetry breaking?,"Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. The term ""spontaneous symmetry breaking"" is a misnomer here as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. Explicit symmetry breaking differs from spontaneous symmetry breaking. When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum even though the entire Lagrangian retains that symmetry. ==Overview== By definition, spontaneous symmetry breaking requires the existence of physical laws (e.g. quantum mechanics) which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value) which is not invariant under the symmetry in question, we say that the system is in the ordered phase, and the symmetry is spontaneously broken. Symmetry breaking can be distinguished into two types, explicit and spontaneous. In conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (i.e, electroweak interactions.) Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. The symmetry is spontaneously broken as when the Hamiltonian becomes invariant under the inversion transformation, but the expectation value is not invariant. In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. Advances in Physics, vol. 2 Interscience Publishers, New York. pp. 567–708 * Spontaneous Symmetry Breaking in Gauge Theories: a Historical Survey *The Royal Society Publishing: Spontaneous symmetry breaking in gauge theories *University of Cambridge, David Tong: Lectures on Quantum Field Theory for masters level students. In particle physics, chiral symmetry breaking is the spontaneous symmetry breaking of a chiral symmetry - usually by a gauge theory such as quantum chromodynamics, the quantum field theory of the strong interaction. Dynamical breaking of a global symmetry is a spontaneous symmetry breaking, which happens not at the (classical) tree level (i.e., at the level of the bare action), but due to quantum corrections (i.e., at the level of the effective action). There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including: topologically ordered phases of matter, such as fractional quantum Hall liquids, and spin-liquids. Hence, the symmetry is said to be spontaneously broken in that theory. The explicit symmetry breaking occurs at a smaller energy scale. A special case of this type of symmetry breaking is dynamical symmetry breaking. In the absence of explicit breaking, spontaneous symmetry breaking would engender massless Nambu–Goldstone bosons for the exact spontaneously broken chiral symmetries. 
","Spontaneous symmetry breaking occurs when the action of a theory has no symmetry, but the vacuum state has a symmetry. In that case, there will exist a local operator that is non-invariant under the symmetry, giving it a nonzero vacuum expectation value.","Spontaneous symmetry breaking occurs when the action of a theory has a symmetry, and the vacuum state also has the same symmetry. In that case, there will exist a local operator that is invariant under the symmetry, giving it a zero vacuum expectation value.","Spontaneous symmetry breaking occurs when the action of a theory has no symmetry, and the vacuum state also has no symmetry. In that case, there will exist a local operator that is invariant under the symmetry, giving it a zero vacuum expectation value.","Spontaneous symmetry breaking occurs when the action of a theory has a symmetry, but the vacuum state violates this symmetry. In that case, there will exist a local operator that is invariant under the symmetry, giving it a zero vacuum expectation value.","Spontaneous symmetry breaking occurs when the action of a theory has a symmetry, but the vacuum state violates this symmetry. In that case, there will exist a local operator that is non-invariant under the symmetry, giving it a nonzero vacuum expectation value.",E,kaggle200,"Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., Lagrangian).
Outside of gauge symmetry, spontaneous symmetry breaking is associated with phase transitions. For example, in the Ising model, as the temperature of the system falls below the critical temperature, the Z2 symmetry of the vacuum is broken, giving a phase transition of the system.
In the Lagrangian setting of quantum field theory, the Lagrangian L is a functional of quantum fields which is invariant under the action of a symmetry group G. However, the ground state configuration (the vacuum expectation value) of the fields may not be invariant under G, but instead partially breaks the symmetry to a subgroup H of G. This is spontaneous symmetry breaking.
Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetry. In that case there will exist a local operator that is non-invariant under the symmetry giving it a nonzero vacuum expectation value. Such non-invariant local operators always have vanishing vacuum expectation values for finite size systems prohibiting spontaneous symmetry breaking. This occurs because over large timescales, finite systems always transition between all its possible ground states, averaging away the expectation value to zero.","Spontaneous symmetry breaking is also associated with phase transitions. For example in the Ising model, as the temperature of the system falls below the critical temperature the Z2 symmetry of the vacuum is broken, giving a phase transition of the system.
Spontaneous symmetry breaking: When the Hamiltonian of a system (or the Lagrangian) has a certain symmetry, but the vacuum does not, then one says that spontaneous symmetry breaking (SSB) has taken place.
A field theory admits numerous types of symmetries, with the two most common ones being global and local symmetries. Global symmetries are fields transformations acting the same way everywhere while local symmetries act on fields in a position dependent way. The latter correspond to redundancies in the description of the system. This is a consequence of Noether's second theorem which shows that each gauge symmetry degree of freedom corresponds to a relation among the Euler–Lagrange equations, making the system underdetermined. Underdeterminacy requires gauge fixing of the non-propagating components so that the equations of motion admits a unique solution.Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetry. In that case there will exist a local operator that is non-invariant under the symmetry giving it a nonzero vacuum expectation value. Such non-invariant local operators always have vanishing vacuum expectation values for finite size systems prohibiting spontaneous symmetry breaking. This occurs because over large timescales, finite systems always transition between all its possible ground states, averaging away the expectation value to zero.While spontaneous symmetry breaking can occur for global symmetries, Elitzur's theorem states that the same is not the case for gauge symmetries; all vacuum expectation values of gauge non-invariant operators are vanishing, even in systems of infinite size. On the lattice this follows from the fact that integrating gauge non-invariant observables over a group measure always yields zero for compact gauge groups. Positivity of the measure and gauge invariance are sufficient to prove the theorem. This is also an explanation for why gauge symmetries are mere redundancies in lattice field theories, where the equations of motion need not define a well-posed problem as they do not need to be solved. Instead, Elitzur's theorem shows that any observable that is not invariant under the symmetry has a vanishing expectation value making it unobservable and thus redundant.","For example in the Ising model, as the temperature of the system falls below the critical temperature the Z2 symmetry of the vacuum is broken, giving a phase transition of the system.
Spontaneous symmetry breaking When the Hamiltonian of a system (or the Lagrangian) has a certain symmetry, but the vacuum does not, then one says that spontaneous symmetry breaking (SSB) has taken place.
A field theory admits numerous types of symmetries, with the two most common ones being global and local symmetriesSpontaneous symmetry breaking is also associated with phase transitions- Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., Lagrangian).
Outside of gauge symmetry, spontaneous symmetry breaking is associated with phase transitionsThis is spontaneous symmetry breaking.
Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetryThis occurs because over large timescales, finite systems always transition between all its possible ground states, averaging away the expectation value to zero.While spontaneous symmetry breaking can occur for global symmetries, Elitzur's theorem states that the same is not the case for gauge symmetries; all vacuum expectation values of gauge non-invariant operators are vanishing, even in systems of infinite sizeUnderdeterminacy requires gauge fixing of the non-propagating components so that the equations of motion admits a unique solution.Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetryFor example in the Ising model, as the temperature of the system falls below the critical temperature the formula_11 symmetry of the vacuum is broken, giving a phase transition of the system.
In the Lagrangian setting of quantum field theory, the Lagrangian formula_6 is a functional of quantum fields which is invariant under the action of a symmetry group formula_3However, the g","For example in the Ising model, as the temperature of the system falls below the critical temperature the Z2 symmetry of the vacuum is broken, giving a phase transition of the system.
Spontaneous symmetry breaking When the Hamiltonian of a system (or the Lagrangian) has a certain symmetry, but the vacuum does not, then one says that spontaneous symmetry breaking (SSB) has taken place.
A field theory admits numerous types of symmetries, with the two most common ones being global and local symmetriesSpontaneous symmetry breaking is also associated with phase transitions- Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., Lagrangian).
Outside of gauge symmetry, spontaneous symmetry breaking is associated with phase transitionsThis is spontaneous symmetry breaking.
Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetryThis occurs because over large timescales, finite systems always transition between all its possible ground states, averaging away the expectation value to zero.While spontaneous symmetry breaking can occur for global symmetries, Elitzur's theorem states that the same is not the case for gauge symmetries; all vacuum expectation values of gauge non-invariant operators are vanishing, even in systems of infinite sizeUnderdeterminacy requires gauge fixing of the non-propagating components so that the equations of motion admits a unique solution.Spontaneous symmetry breaking occurs when the action of a theory has a symmetry but the vacuum state violates this symmetryFor example in the Ising model, as the temperature of the system falls below the critical temperature the formula_11 symmetry of the vacuum is broken, giving a phase transition of the system.
In the Lagrangian setting of quantum field theory, the Lagrangian formula_6 is a functional of quantum fields which is invariant under the action of a symmetry group formula_3However, the g[SEP]What is spontaneous symmetry breaking?","['E', 'D', 'C']",1.0
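The following short Python sketch is not part of the source material; it illustrates the statement above with the textbook double-well potential V(phi) = -mu^2 phi^2 + lambda phi^4. The potential (standing in for the symmetric action or Hamiltonian) is invariant under the Z2 reflection phi -> -phi, but each of its two degenerate minima, the candidate vacua, is not, so selecting one of them spontaneously breaks the symmetry. The numerical parameters and variable names are arbitrary illustrative assumptions, and numpy is assumed to be available.

import numpy as np

mu2, lam = 1.0, 0.25  # illustrative parameters, not taken from the source

def V(phi):
    # Double-well potential: even in phi, hence invariant under phi -> -phi (Z2 symmetry)
    return -mu2 * phi**2 + lam * phi**4

phi = np.linspace(-3.0, 3.0, 601)
assert np.allclose(V(phi), V(-phi))  # the "law" itself respects the symmetry

# The minima (vacua) sit at phi = +/- sqrt(mu^2 / (2*lambda)); neither one is Z2-invariant,
# but their energies are degenerate, exactly as the unbroken symmetry of V requires.
vacua = np.array([-1.0, 1.0]) * np.sqrt(mu2 / (2.0 * lam))
print(vacua)      # [-1.4142...  1.4142...]
print(V(vacua))   # [-1. -1.]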
What is the proper distance for a redshift of 8.2?,"__NOTOC__ MACS0647-JD is a galaxy with a redshift of about z = 10.7, equivalent to a light travel distance of 13.26 billion light-years (4 billion parsecs). Using Hubble's law, the redshift can be used to estimate the distance of an object from Earth. Photometric redshifts were originally determined by calculating the expected observed data from a known emission spectrum at a range of redshifts. In the absence of sufficient telescope time to determine a spectroscopic redshift for each object, the technique of photometric redshifts provides a method to determine an at least qualitative characterization of a redshift. A photometric redshift is an estimate for the recession velocity of an astronomical object such as a galaxy or quasar, made without measuring its spectrum. BDF-3299 is a remote galaxy with a redshift of z = 7.109 corresponds to a distance traveled by light to come down to Earth of 12.9 billion light-years. ==See also== *List of most distant galaxies *List of the most distant astronomical objects ==Sources== * Category:Galaxies Category:Piscis Austrinus BDF-521 is a remote galaxy with a redshift of z = 7.008 corresponds to a distance traveled by light to come down to Earth of 12.89 billion light years. ==See also== *List of the most distant astronomical objects *List of galaxies Category:Galaxies Category:Piscis Austrinus This was later extended to the CfA2 redshift survey of 15,000 galaxies, completed in the early 1990s. At present, the errors on photometric redshift measurements are significantly higher than those of spectroscopic redshifts, but future surveys (for example, the LSST) aim to significantly refine the technique. == See also == * Baryon acoustic oscillations * Intensity mapping * Large-scale structure of the cosmos * Redshift-space distortions * Galaxy filament ==References== ==External links== * Probes of Large Scale Structure * List of galaxy redshift surveys Category:Physical cosmology Category:Observational astronomy Category:Large- scale structure of the cosmos Additional spectroscopic observations by JWST will be needed to accurately confirm the redshift of MACS0647-JD. == See also == * List of the most distant astronomical objects * Farthest galaxies ==References== ==External links== * * NASA Great Observatories Find Candidate for Most Distant Object in the Universe to Date * European Space Agency – Galaxy cluster MACS J0647.7+7015 Category:Galaxies Category:Camelopardalis Category:Dwarf galaxies If the distance estimate is correct, it formed about 427 million years after the Big Bang. ==Details== JD refers to J-band Dropout – the galaxy was not detected in the so-called J-band (F125W), nor in 14 bluer Hubble filters. Because of the demands on observing time required to obtain spectroscopic redshifts (i.e., redshifts determined directly from spectral features measured at high precision), a common alternative is to use photometric redshifts based on model fits to the brightnesses and colors of objects. As photometric filters are sensitive to a range of wavelengths, and the technique relies on making many assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations.Bolzonella, M.; Miralles, J.-M.; Pelló, R., Photometric redshifts based on standard SED fitting procedures, Astronomy and Astrophysics, 363, p.476-492 (2000). 
The photometric redshift technique has come back into mainstream use since 2000, as a result of large sky surveys conducted in the late 1990s and 2000s which have detected a large number of faint high-redshift objects, and telescope time limitations mean that only a small fraction of these can be observed by spectroscopy. In recent years, Bayesian statistical methods and artificial neural networks have been used to estimate redshifts from photometric data. ==References== ==External links== *What are photometric redshifts? The first systematic redshift survey was the CfA Redshift Survey of around 2,200 galaxies, started in 1977 with the initial data collection completed in 1982. Infrared NIRCam imaging of MACS0647-JD by the James Webb Space Telescope (JWST) in September 2022 determined a photometric redshift of , in agreement with the previous Hubble estimate. Other means of estimating the redshift based on alternative observed quantities have been developed, like for instance morphological redshifts applied to galaxy clusters which rely on geometric measurements J.M. Diego et al. Morphological redshift estimates for galaxy clusters in a Sunyaev-Zel'dovich effect survey. The technique uses photometry (that is, the brightness of the object viewed through various standard filters, each of which lets through a relatively broad passband of colours, such as red light, green light, or blue light) to determine the redshift, and hence, through Hubble's law, the distance, of the observed object. It is less than 600 light-years wide, and contains roughly a billion stars. ","The proper distance for a redshift of 8.2 is about 6.2 Gpc, or about 24 billion light-years.","The proper distance for a redshift of 8.2 is about 7.2 Gpc, or about 26 billion light-years.","The proper distance for a redshift of 8.2 is about 9.2 Gpc, or about 30 billion light-years.","The proper distance for a redshift of 8.2 is about 8.2 Gpc, or about 28 billion light-years.","The proper distance for a redshift of 8.2 is about 10.2 Gpc, or about 32 billion light-years.",C,kaggle200,"BDF-3299 is a remote galaxy with a redshift of z = 7.109 corresponds to a distance traveled by light to come down to Earth of 12.9 billion light-years.
The highest-redshift quasar known was ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because space itself has also been expanding).
The cosmic microwave background has a redshift of z = 1089, corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of 20 < z < 100. Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of z > 10^10) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of z > 10^25.
The most distant astronomical object identified (as of 2022) is a galaxy classified as HD1, with a redshift of 13.27, corresponding to a distance of about 33.4 billion light years. In 2009, a gamma ray burst, GRB 090423, was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. The burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light-years), though this would be the ""light travel distance"" (see Distance measures (cosmology)) rather than the ""proper distance"" used in both Hubble's law and in defining the size of the observable universe (cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases on this page, and at the bottom of the page offers online calculators that can be used to calculate the current proper distance to a distant object in a flat universe based on either the redshift ""z"" or the light travel time). The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years.","13 Ym – 1.37 billion light-years – Length of the South Pole Wall 13 Ym – 1.38 billion light-years – Length of the Sloan Great Wall 18 Ym – redshift 0.16 – 1.9 billion light-years – Distance to the quasar 3C 273 (light travel distance) 30.8568 Ym – 3.2616 billion light-years – 1 gigaparsec 31.2204106 Ym − 3.3 billion light-years − Length of The Giant Arc, a large cosmic structure discovered in 2021 33 Ym – 3.5 billion light-years – Maximum distance of the 2dF Galaxy Redshift Survey (light travel distance) 37.8 Ym – 4 billion light-years – Length of the Huge-LQG 75 Ym – redshift 0.95 – 8 billion light-years – Approximate distance to the supernova SN 2002dd in the Hubble Deep Field North (light travel distance) 85 Ym – redshift 1.6 – 9 billion light-years – Approximate distance to the gamma-ray burst GRB 990123 (light travel distance) 94.6 Ym – 10 billion light-years – Approximate distance to quasar OQ172 94.6 Ym – 10 billion light-years – Length of the Hercules–Corona Borealis Great Wall, one of the largest and most massive-known cosmic structures known
Highest redshifts: Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of GN-z11, with a redshift of z = 11.1, corresponding to 400 million years after the Big Bang. The previous record was held by UDFy-38135539 at a redshift of z = 8.6, corresponding to 600 million years after the Big Bang. Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift z = 7.5 and the next highest being z = 7.0. The most distant-observed gamma-ray burst with a spectroscopic redshift measurement was GRB 090423, which had a redshift of z = 8.2. The most distant-known quasar, ULAS J1342+0928, is at z = 7.54. The highest-known redshift radio galaxy (TGSS1530) is at a redshift z = 5.72 and the highest-known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at z = 6.42. Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs). The cosmic microwave background has a redshift of z = 1089, corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of 20 < z < 100. Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of z > 10^10) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of z > 10^25. In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60. Such stars are likely to have existed in the very early universe (i.e., at high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life as we know it.
The most distant astronomical object identified (as of 2022) is a galaxy classified as HD1, with a redshift of 13.27, corresponding to a distance of about 33.4 billion light years. In 2009, a gamma ray burst, GRB 090423, was found to have a redshift of 8.2, which indicates that the collapsing star that caused it exploded when the universe was only 630 million years old. The burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light-years) - however, this would be the ""light travel distance"" (see Distance measures (cosmology)) rather than the ""proper distance"" used in both Hubble's law and in defining the size of the observable universe, and cosmologist Ned Wright argues against using this measure. The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years.","The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-yearsThe proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years.Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift z = 7.5 and the next highest being z = 7.0- BDF-3299 is a remote galaxy with a redshift of z = 7.109 corresponds to a distance traveled by light to come down to Earth of 12.9 billion light-years.
The highest-redshift quasar known () was ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because space itself has also been expanding).
The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-yearsThe burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light-years), though this would be the ""light travel distance"" (see Distance measures (cosmology)) rather than the ""proper distance"" used in both Hubble's law and in defining the size of the observable universe (cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases on this page, and at the bottom of the page offers online calculators that can be used to calculate the current proper distance to a distant object in a flat universe based on either the redshift ""z"" or the light travel time)The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of GN-z11, with a redshift of z = 11.1, corresponding to 400 million years after the Big BangOther high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from abo","The proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-yearsThe proper distance for a redshift of 8.2 would be about 9.2 Gpc, or about 30 billion light-years.Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift z = 7.5 and the next highest being z = 7.0- BDF-3299 is a remote galaxy with a redshift of z = 7.109 corresponds to a distance traveled by light to come down to Earth of 12.9 billion light-years.
The highest-redshift quasar known () was ULAS J1342+0928, with a redshift of 7.54, which corresponds to a comoving distance of approximately 29.36 billion light-years from Earth (these distances are much larger than the distance light could travel in the universe's 13.8-billion-year history because space itself has also been expanding).
The cosmic microwave background has a redshift of , corresponding to an age of approximately 379,000 years after the Big Bang and a proper distance of more than 46 billion light-yearsThe burst happened approximately 13 billion years ago, so a distance of about 13 billion light-years was widely quoted in the media (or sometimes a more precise figure of 13.035 billion light-years), though this would be the ""light travel distance"" (see Distance measures (cosmology)) rather than the ""proper distance"" used in both Hubble's law and in defining the size of the observable universe (cosmologist Ned Wright argues against the common use of light travel distance in astronomical press releases on this page, and at the bottom of the page offers online calculators that can be used to calculate the current proper distance to a distant object in a flat universe based on either the redshift ""z"" or the light travel time)The most reliable redshifts are from spectroscopic data, and the highest-confirmed spectroscopic redshift of a galaxy is that of GN-z11, with a redshift of z = 11.1, corresponding to 400 million years after the Big BangOther high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from abo[SEP]What is the proper distance for a redshift of 8.2?","['C', 'E', 'D']",1.0
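As a cross-check on the figure quoted above, the proper (comoving) distance for z = 8.2 can be computed with a standard cosmology library. The sketch below is illustrative only and assumes astropy with its built-in Planck18 parameter set; the exact number shifts slightly with the chosen cosmology, but lands near 9.2 Gpc, i.e. roughly 30 billion light-years, with a light-travel (lookback) time of about 13 billion years, matching the values discussed in the record above.

from astropy.cosmology import Planck18  # assumed parameter set; other recent cosmologies give similar values
import astropy.units as u

z = 8.2
d_proper = Planck18.comoving_distance(z)   # proper distance today to an object at z = 8.2
t_lookback = Planck18.lookback_time(z)     # the "light travel" figure often quoted instead

print(d_proper.to(u.Gpc))   # ~9.2 Gpc
print(d_proper.to(u.lyr))   # ~3.0e10 light-years, i.e. about 30 billion
print(t_lookback)           # ~13.2 Gyr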
Who was the first to determine the velocity of a star moving away from the Earth using the Doppler effect?,"He studied the Doppler displacement of the spectral lines of stars to determine their radial velocities deducing a star's absolute dimensions, masses, and the orbital elements of some specific stars. In 1912, he was the first to observe the shift of spectral lines of galaxies, making him the discoverer of galactic redshifts.Slipher first reports on the making the first Doppler measurement on September 17, 1912 in The radial velocity of the Andromeda Nebula in the inaugural volume of the Lowell Observatory Bulletin, pp. 2.56–2.57. He predicted that the small Doppler shifts to the light emitted by the star, caused by its continuously varying radial velocity, would be detectable by the most sensitive spectrographs as tiny redshifts and blueshifts in the star's emission. thumb|right|Doppler spectroscopy detects periodic shifts in radial velocity by recording variations in the color of light from the host star. \, In his 1905 paper on special relativity, Einstein obtained a somewhat different looking equation for the Doppler shift equation. The observed Doppler velocity, K = V_\mathrm{star}\sin(i), where i is the inclination of the planet's orbit to the line perpendicular to the line-of-sight. Using the Doppler effect and noting subtle changes, he measured the speeds in which spiral nebulae traveled during his research from 1912 and onward. Following this approach towards deriving the relativistic longitudinal Doppler effect, assume the receiver and the source are moving away from each other with a relative speed v\, as measured by an observer on the receiver or the source (The sign convention adopted here is that v\, is negative if the receiver and the source are moving towards each other). If we consider the angles relative to the frame of the source, then v_s = 0 and the equation reduces to , Einstein's 1905 formula for the Doppler effect. Doppler spectroscopy (also known as the radial-velocity method, or colloquially, the wobble method) is an indirect method for finding extrasolar planets and brown dwarfs from radial-velocity measurements via observation of Doppler shifts in the spectrum of the planet's parent star. Indeed, we obtain , the formula for relativistic longitudinal Doppler shift. Comparison of the relativistic Doppler effect (top) with the non-relativistic effect (bottom). The traditional analysis of the Doppler effect for sound represents a low speed approximation to the exact, relativistic analysis. A certain persistent critic of relativity maintained that, although the experiment was consistent with general relativity, it refuted special relativity, his point being that since the emitter and absorber were in uniform relative motion, special relativity demanded that a Doppler shift be observed. The transverse Doppler effect is one of the main novel predictions of the special theory of relativity. Alfred Harrison Joy (September 23, 1882 in Greenville, Illinois – April 18, 1973 in Pasadena, California) was an astronomer best known for his work on stellar distances, the radial motion of stars, and variable stars. He was the first to discover that distant galaxies are redshifted, thus providing the first empirical basis for the expansion of the universe.Physics ArXiv preprintPhysics ArXiv preprint He was also the first to relate these redshifts to velocity. == Personal life == Vesto Slipher was born in Mulberry, Indiana, to Daniel Clark and Hannah App Slipher. 
First-year physics textbooks almost invariably analyze Doppler shift for sound in terms of Newtonian kinematics, while analyzing Doppler shift for light and electromagnetic phenomena in terms of relativistic kinematics. Vesto Melvin Slipher (; November 11, 1875 – November 8, 1969) was an American astronomer who performed the first measurements of radial velocities for galaxies. The 1993 version of the experiment verified time dilation, and hence TDE, to an accuracy of 2.3×10−6. == Relativistic Doppler effect for sound and light == thumb|Figure 10. ",Fraunhofer,William Huggins,Hippolyte Fizeau,Vogel and Scheiner,None of the above,B,kaggle200,"Doppler broadening can also be used to determine the velocity distribution of a gas given its absorption spectrum. In particular, this has been used to determine the velocity distribution of interstellar gas clouds.
In the following table, it is assumed that for v > 0 the receiver and the source are moving away from each other, v being the relative velocity and c the speed of light, and β = v/c.
37 Cancri is a star in the zodiac constellation of Cancer. It is a challenge to view with the naked eye, having an apparent magnitude of 6.54. The star is moving away from the Earth with a heliocentric radial velocity of +22 km/s, having come as close as some 2.7 million years ago.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the ""Doppler–Fizeau effect"". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the ""annual Doppler effect"", the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.","The photoacoustic Doppler effect is a type of Doppler effect that occurs when an intensity modulated light wave induces a photoacoustic wave on moving particles with a specific frequency. The observed frequency shift is a good indicator of the velocity of the illuminated moving particles. A potential biomedical application is measuring blood flow.
There are 2 primary ultrasonic measurement technologies used in water metering: Doppler effect meters which utilize the Doppler Effect to determine the velocity of water passing through the meter.
The history of the subject began with the development in the 19th century of classical wave mechanics and the exploration of phenomena associated with the Doppler effect. The effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. The hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot in 1845. Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations.The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the ""Doppler–Fizeau effect"". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.Arthur Eddington used the term red shift as early as 1923. The word does not appear unhyphenated until about 1934 by Willem de Sitter.Beginning with observations in 1912, Vesto Slipher discovered that most spiral galaxies, then mostly thought to be spiral nebulae, had considerable redshifts. Slipher first reports on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he states that ""the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well."" Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable ""positive"" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such ""nebulae"" and the distances to them with the formulation of his eponymous Hubble's law. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the Friedmann–Lemaître equations. In the present day they are considered strong evidence for an expanding universe and the Big Bang theory.","The star is moving away from the Earth with a heliocentric radial velocity of +22 km/s, having come as close as some 2.7 million years ago.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effectIn 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the EarthIn 1887, Vogel and Scheiner discovered the ""annual Doppler effect"", the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the EarthIn 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this methodOnly later was Doppler vindicated by verified redshift observations.The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effectDoppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the EarthThe effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842In particular, this has been used to determine the velocity distribution of interstellar gas clouds.
In the following table, it is assumed that for formula_1 the receiver and the source are moving away from each other, formula_2 being the relative velocity and formula_3 the speed of light, and formula_4.
37 Cancri is a star in the zodiac constellation of CancerBefore this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motionThe observed frequency shift is a good indicator of the velocity of the illuminated moving particlesIn 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red","The star is moving away from the Earth with a heliocentric radial velocity of +22 km/s, having come as close as some 2.7 million years ago.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effectIn 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the EarthIn 1887, Vogel and Scheiner discovered the ""annual Doppler effect"", the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the EarthIn 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this methodOnly later was Doppler vindicated by verified redshift observations.The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effectDoppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the EarthThe effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842In particular, this has been used to determine the velocity distribution of interstellar gas clouds.
In the following table, it is assumed that for formula_1 the receiver and the source are moving away from each other, formula_2 being the relative velocity and formula_3 the speed of light, and formula_4.
37 Cancri is a star in the zodiac constellation of CancerBefore this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motionThe observed frequency shift is a good indicator of the velocity of the illuminated moving particlesIn 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red[SEP]Who was the first to determine the velocity of a star moving away from the Earth using the Doppler effect?","['C', 'D', 'E']",0.0
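For concreteness, here is a small illustrative Python sketch (not from the source) of the two Doppler relations discussed above: the low-speed approximation z ≈ v/c that underlies stellar radial-velocity measurements like Huggins', and the relativistic longitudinal formula 1 + z = sqrt((1 + v/c)/(1 - v/c)). The +22 km/s input is the radial velocity quoted above for 37 Cancri; the function names are my own.

import math

C_KM_S = 299792.458  # speed of light in km/s

def redshift_classical(v_km_s: float) -> float:
    # Low-speed (v << c) Doppler redshift, z ≈ v/c, positive for a receding source.
    return v_km_s / C_KM_S

def redshift_relativistic(v_km_s: float) -> float:
    # Relativistic longitudinal Doppler shift: 1 + z = sqrt((1 + beta)/(1 - beta)).
    beta = v_km_s / C_KM_S
    return math.sqrt((1.0 + beta) / (1.0 - beta)) - 1.0

v = 22.0  # km/s, receding
print(redshift_classical(v))     # ~7.34e-05
print(redshift_relativistic(v))  # essentially identical at such a small velocity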
What is the information loss paradox in black holes?,"This perspective holds that Hawking's computation is reliable until the final stages of black-hole evaporation when information suddenly escapes. The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. It is now generally believed that information is preserved in black-hole evaporation. Starting in the mid-1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments that suggested that black-hole evaporation loses information, and is therefore inconsistent with unitarity. As explained above, one way to frame the information paradox is that Hawking's calculation appears to show that the von Neumann entropy of Hawking radiation increases throughout the lifetime of the black hole. Moreover, the argument for information loss relied on the causal structure of the black-hole spacetime, which suggests that information in the interior should not affect any observation in the exterior including observations performed on the radiation emitted by the black hole. On the other hand, this idea implies that just before the sudden escape of information, a very small black hole must be able to store an arbitrary amount of information and have a very large number of internal states. According to the external observer, infalling information heats up the stretched horizon, which then reradiates it as Hawking radiation, with the entire evolution being unitary. *Information is stored in a large remnant This idea suggests that Hawking radiation stops before the black hole reaches the Planck size. Taken together these puzzles about black hole evaporation have implications for how gravity and quantum mechanics must be combined, leading to the information paradox remaining an active field of research within quantum gravity. == Relevant principles == In quantum mechanics, the evolution of the state is governed by the Schrödinger equation. Therefore Hawking's argument suggests that the process of black-hole evaporation cannot be described within the framework of unitary evolution. Within, what might be termed, the loop-quantum-gravity approach to black holes, it is believed that understanding this phase of evaporation is crucial to resolving the information paradox. Since the black hole never evaporates, information about its initial state can remain inside the black hole and the paradox disappears. However, if the black hole formed from a pure state with zero entropy, unitarity implies that the entropy of the Hawking radiation must decrease back to zero once the black hole evaporates completely. Once the black holes evaporate completely, in both cases, one will be left with a featureless gas of radiation. Hawking argued that the process of radiation would continue until the black hole had evaporated completely. In 2004, Hawking also conceded the 1997 bet, paying Preskill with a baseball encyclopedia ""from which information can be retrieved at will"" although Thorne refused to concede. == Solutions == Since the 1997 proposal of the AdS/CFT correspondence, the predominant belief among physicists is that information is indeed preserved in black hole evaporation. Hawking also argued that the detailed form of the radiation would be independent of the initial state of the black hole, and would depend only on its mass, electric charge and angular momentum. 
These scenarios are broadly called remnant scenarios since information does not emerge gradually but remains in the black-hole interior only to emerge at the end of black-hole evaporation. Therefore, Hawking argued that if the star or material that collapsed to form the black hole started in a specific pure quantum state, the process of evaporation would transform the pure state into a mixed state. ","Black holes have an infinite number of internal parameters, so all the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. As black holes evaporate by emitting Hawking radiation, the information is lost forever.","Black holes have an infinite number of internal parameters, so all the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. As black holes evaporate by emitting Hawking radiation, the information is lost temporarily but reappears once the black hole has fully evaporated.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As black holes evaporate by emitting Hawking radiation, the information is lost forever.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As black holes evaporate by emitting Hawking radiation, the information is preserved and reappears once the black hole has fully evaporated.","Black holes have only a few internal parameters, so most of the information about the matter that went into forming the black hole is preserved. Regardless of the type of matter which goes into a black hole, it appears that all the information is conserved. As black holes evaporate by emitting Hawking radiation, the information is preserved and reappears once the black hole has fully evaporated.",C,kaggle200,"Hawking radiation is black-body radiation that is predicted to be released by black holes, due to quantum effects near the event horizon. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If black holes evaporate via Hawking radiation, a non-rotating and uncharged stupendously large black hole with a mass of will evaporate in around . Black holes formed during the predicted collapse of superclusters of galaxies in the far future with would evaporate over a timescale of up to .
In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ħc³/(8πGMk_B); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.
There are three reasons the black holes in the ELIRGs could be massive. First, the embryonic black holes might be bigger than thought possible. Second, the Eddington limit was exceeded. When a black hole feeds, gas falls in and heats, emitting light. The pressure of the emitted light forces the gas outward, creating a limit to how fast the black hole can continuously absorb matter. If a black hole broke this limit, it could theoretically increase in size at a fast rate. Black holes have previously been observed breaking this limit; the black hole in the study would have had to repeatedly break the limit to grow this large. Third, the black holes might just be bending this limit, absorbing gas faster than thought possible, if the black hole is not spinning fast. If a black hole spins slowly, it will not repel its gas absorption as much. A slow-spinning black hole can absorb more matter than a fast-spinning black hole. The massive black holes in ELIRGs could be absorbing matter for a longer time.
Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.","In supersymmetric theories, near-extremal black holes are often small perturbations of supersymmetric black holes. Such black holes have a very small Hawking temperature and consequently emit a small amount of Hawking radiation. Their black hole entropy can often be calculated in string theory, much like in the case of extremal black holes, at least to the first order in non-extremality.
Evaporation: In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ℏc³/(8πGMk_B); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes. A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimeter. If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10^−24 m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10^−88 seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope launched in 2008 will continue the search for these flashes. If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period of 10^64 years. A supermassive black hole with a mass of 10^11 M☉ will evaporate in around 2×10^100 years. Some monster black holes in the universe are predicted to continue to grow up to perhaps 10^14 M☉ during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10^106 years. Some models of quantum gravity predict modifications of the Hawking description of black holes. In particular, the evolution equations describing the mass loss rate and charge loss rate get modified.
Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem.One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the ""firewall paradox"" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted. This seemingly creates a paradox: a principle called ""monogamy of entanglement"" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a ""firewall"" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate.","As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principleIn particular, the evolution equations describing the mass loss rate and charge loss rate get modified.
Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. In 2012, the ""firewall paradox"" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Then, it will emit only a finite amount of information encoded within its Hawking radiation. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If a black hole broke this limit, it could theoretically increase in size at a fast rate. The massive black holes in ELIRGs could be absorbing matter for a longer time.
Because a black hole has only a few internal parameters, most of the","As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. In particular, the evolution equations describing the mass loss rate and charge loss rate get modified.
Information loss paradox Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. In 2012, the ""firewall paradox"" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem. One attempt to resolve the black hole information paradox is known as black hole complementarity. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever. The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Then, it will emit only a finite amount of information encoded within its Hawking radiation. This radiation reduces the mass and energy of black holes, causing them to shrink and ultimately vanish. If a black hole broke this limit, it could theoretically increase in size at a fast rate. The massive black holes in ELIRGs could be absorbing matter for a longer time.
Because a black hole has only a few internal parameters, most of the[SEP]What is the information loss paradox in black holes?","['C', 'D', 'E']",1.0
What is the Kutta condition?,"The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist. == The Kutta condition applied to airfoils == thumb|right|400px|Upper figure: Zero- circulation flow pattern around an airfoil. One of the consequences of the Kutta condition is that the airflow over the topside of the airfoil travels much faster than the airflow under the underside. The Kutta condition is an alternative method of incorporating some aspects of viscous effects, while neglecting others, such as skin friction and some other boundary layer effects. In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. The Kutta condition is significant when using the Kutta–Joukowski theorem to calculate the lift created by an airfoil with a sharp trailing edge. This is known as the Kutta condition.Clancy, L.J. Aerodynamics, Sections 4.5 and 4.8 When an airfoil is moving with an angle of attack, the starting vortex has been cast off and the Kutta condition has become established, there is a finite circulation of the air around the airfoil. Xu (1998) ""Kutta condition for sharp edge flows"", Mechanics Research Communications == The Kutta condition in aerodynamics == The Kutta condition allows an aerodynamicist to incorporate a significant effect of viscosity while neglecting viscous effects in the underlying conservation of momentum equation. Mathematically, the Kutta condition enforces a specific choice among the infinite allowed values of circulation. == See also == * Kutta–Joukowski theorem * Horseshoe vortex * Starting vortex ==References== * L. J. Clancy (1975) Aerodynamics, Pitman Publishing Limited, London. Kuethe and Schetzer state the Kutta condition as follows: > A body with a sharp trailing edge which is moving through a fluid will > create about itself a circulation of sufficient strength to hold the rear > stagnation point at the trailing edge. In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.Farzad Mohebbi and Mathieu Sellier (2014) ""On the Kutta Condition in Potential Flow over Airfoil"", Journal of Aerodynamics Farzad Mohebbi (2018) ""FOILincom: A fast and robust program for solving two dimensional inviscid steady incompressible flows (potential flows) over isolated airfoils"", The same Kutta condition implementation method is also used for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils.Farzad Mohebbi (2018) ""FOILcom: A fast and robust program for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils"", Farzad Mohebbi (2019) ""On the Kutta Condition in Compressible Flow over Isolated Airfoils"", Fluids The viscous correction for the Kutta condition can be found in some of the recent studies. *""Flow around an airfoil"" at the University of Geneva *""Kutta condition for lifting flows"" by Praveen Chandrashekar of the National Aerospace Laboratories of India * * A.M. Kuethe and J.D. Schetzer, Foundations of Aerodynamics, John Wiley & Sons, Inc. 
Lower figure: Flow pattern with circulation consistent with the Kutta condition, in which both the upper and lower flows leave the trailing edge smoothly. The Kutta condition does not apply to unsteady flow. This weak starting vortex causes the Kutta condition to be re-established for the new speed or angle of attack. Van Nostrand Reinhold Co. London (1970) Library of Congress Catalog Card No. 67-25005 * C. Xu, ""Kutta condition for sharp edge flows"", Mechanics Research Communications 25(4):415-420 (1998). Millikan, Clark B. (1941), Aerodynamics of the Airplane, p.65, John Wiley & Sons, New York The Kutta condition gives some insight into why airfoils usually have sharp trailing edges, even though this is undesirable from structural and manufacturing viewpoints. The airfoil is generating lift, and the magnitude of the lift is given by the Kutta–Joukowski theorem. Whenever the speed or angle of attack of an airfoil changes there is a weak starting vortex which begins to form, either above or below the trailing edge. The flow over the topside conforms to the upper surface of the airfoil. ","The Kutta condition is a physical requirement that the fluid moving along the lower and upper surfaces of an airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil.","The Kutta condition is a physical requirement that the fluid moving along the lower and upper surfaces of an airfoil meet smoothly, with no fluid moving around the leading edge of the airfoil.",The Kutta condition is a mathematical requirement that the loop used in applying the Kutta-Joukowski theorem must be chosen outside the boundary layer of the airfoil.,The Kutta condition is a mathematical requirement that the flow can be assumed inviscid in the entire region outside the airfoil provided the Reynolds number is large and the angle of attack is small.,The Kutta condition is a physical requirement that the circulation calculated using the loop corresponding to the surface of the airfoil must be zero for a viscous fluid.,A,kaggle200,"When an airfoil is moving with an angle of attack, the starting vortex has been cast off and the Kutta condition has become established, there is a finite circulation of the air around the airfoil. The airfoil is generating lift, and the magnitude of the lift is given by the Kutta–Joukowski theorem.
The Kutta condition is significant when using the Kutta–Joukowski theorem to calculate the lift created by an airfoil with a sharp trailing edge. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition.","The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. It is named for German mathematician and aerodynamicist Martin Kutta.
Kuethe and Schetzer state the Kutta condition as follows:: § 4.11 A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge.
In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. None of the fluid flows around the sharp corner.
The Kutta condition is significant when using the Kutta–Joukowski theorem to calculate the lift created by an airfoil with a sharp trailing edge. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The same Kutta condition implementation method is also used for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils.
The viscous correction for the Kutta condition can be found in some of the recent studies.
Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as Re = ρV∞cA/μ, and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside. In applying the Kutta-Joukowski theorem, the loop must be chosen outside this boundary layer. (For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.) The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition.","The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. It is named for German mathematician and aerodynamicist Martin Kutta.
Kuethe and Schetzer state the Kutta condition as follows:: § 4.11 A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge.
In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. This is known as the Kutta condition. This is known as the Kutta condition. - When an airfoil is moving with an angle of attack, the starting vortex has been cast off and the Kutta condition has become established, there is a finite circulation of the air around the airfoil. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The same Kutta condition implementation method is also used for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils.
The viscous correction for the Kutta condition can be found in some of the recent studies.
Any real fluid is viscous, ","The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. It is named for German mathematician and aerodynamicist Martin Kutta.
Kuethe and Schetzer state the Kutta condition as follows:: § 4.11 A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge.
In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. This is known as the Kutta condition. This is known as the Kutta condition. - When an airfoil is moving with an angle of attack, the starting vortex has been cast off and the Kutta condition has become established, there is a finite circulation of the air around the airfoil. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist.
In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface.
The same Kutta condition implementation method is also used for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils.
The viscous correction for the Kutta condition can be found in some of the recent studies.
Any real fluid is viscous, [SEP]What is the Kutta condition?","['A', 'E', 'B']",1.0
What is classical mechanics?,"Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. Classical mechanics utilises many equations--as well as other mathematical concepts--which relate various physical quantities to one another. The realization that the phase space in classical mechanics admits a natural description as a symplectic manifold (indeed a cotangent bundle in most cases of physical interest), and symplectic topology, which can be thought of as the study of global issues of Hamiltonian mechanics, has been a fertile area of mathematics research since the 1980s. ==See also== * Mechanics * Timeline of classical mechanics ==Notes== ==References== * * * Classical mechanics Category:Classical mechanics Category:Isaac Newton Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. Classical Mechanics is a textbook about that subject written by Herbert Goldstein, a professor at Columbia University. This article deals with the history of classical mechanics. == Precursors to classical mechanics == === Antiquity === The ancient Greek philosophers, Aristotle in particular, were among the first to propose that abstract principles govern nature. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics. == Publication history == The English language editions were published as follows:World Cat author listing The first edition was published by Kibble, as Kibble, T. W. B. Classical Mechanics. Mathematical Methods of Classical Mechanics is a textbook by mathematician Vladimir I. Arnold. Although classical mechanics is largely compatible with other ""classical physics"" theories such as classical electrodynamics and thermodynamics, some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms. Similarly, the different behaviour of classical electromagnetism and classical mechanics under velocity transformations led to the theory of relativity. == Classical mechanics in the contemporary era == By the end of the 20th century, classical mechanics in physics was no longer an independent theory. == See also == * List of textbooks in classical and quantum mechanics == References == == Bibliography == * Category:1974 non-fiction books Category:Classical mechanics Category:Graduate Texts in Mathematics Category:Physics textbooks Newton and most of his contemporaries hoped that classical mechanics would be able to explain all entities, including (in the form of geometric optics) light. Most of the framework of Hamiltonian mechanics can be seen in quantum mechanics however the exact meanings of the terms differ due to quantum effects. Category:Classical mechanics Category:Physics textbooks Category:1951 non-fiction books Published in the 1950s, this book replaced the outdated and fragmented treatises and supplements typically assigned to beginning graduate students as a modern text on classical mechanics with exercises and examples demonstrating the link between this and other branches of physics, including acoustics, electrodynamics, thermodynamics, geometric optics, and quantum mechanics. 
Classical mechanics has also been a source of inspiration for mathematicians. Banhagel, an instructor from Detroit, Michigan, observed that despite requiring no more than multivariable and vector calculus, the first edition of Classical Mechanics successfully introduces some sophisticated new ideas in physics to students. Newton also developed the calculus which is necessary to perform the mathematical calculations involved in classical mechanics. The concepts it covers, such as mass, acceleration, and force, are commonly used and known. ","Classical mechanics is the branch of physics that describes the motion of macroscopic objects using concepts such as mass, acceleration, and force. It is based on a three-dimensional Euclidean space with fixed axes, and utilises many equations and mathematical concepts to relate physical quantities to one another.","Classical mechanics is the branch of physics that describes the motion of microscopic objects using concepts such as energy, momentum, and wave-particle duality. It is based on a four-dimensional space-time continuum and utilises many equations and mathematical concepts to relate physical quantities to one another.",Classical mechanics is the branch of physics that studies the behaviour of subatomic particles such as electrons and protons. It is based on the principles of quantum mechanics and utilises many equations and mathematical concepts to describe the properties of these particles.,Classical mechanics is the branch of physics that studies the behaviour of light and electromagnetic radiation. It is based on the principles of wave-particle duality and utilises many equations and mathematical concepts to describe the properties of light.,Classical mechanics is the branch of physics that studies the behaviour of fluids and gases. It is based on the principles of thermodynamics and utilises many equations and mathematical concepts to describe the properties of these substances.,A,kaggle200,"Although classical mechanics is largely compatible with other ""classical physics"" theories such as classical electrodynamics and thermodynamics, some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. When combined with classical thermodynamics, classical mechanics leads to the Gibbs paradox in which entropy is not a well-defined quantity. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms. The effort at resolving these problems led to the development of quantum mechanics. Similarly, the different behaviour of classical electromagnetism and classical mechanics under velocity transformations led to the theory of relativity.
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.
Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics.
Theoretical expositions of this branch of physics has its origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, Huygens, and Newton laid the foundation for what is now known as classical mechanics.","The ""classical"" in ""classical mechanics"" does not refer classical antiquity, as it might in, say, classical architecture. On the contrary the development of classical mechanics involved substantial change in the methods and philosophy of physics. Instead, the qualifier distinguishes classical mechanics from physics developed after the revolutions of the early 20th century, which revealed limitations of classical mechanics.The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on foundational works of Sir Isaac Newton, and the mathematical methods invented by Gottfried Wilhelm Leibniz, Joseph-Louis Lagrange, Leonhard Euler, and other contemporaries in the 17th century to describe the motion of bodies under the influence of forces. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances, made predominantly in the 18th and 19th centuries, extend substantially beyond earlier works, particularly through their use of analytical mechanics. They are, with some modification, also used in all areas of modern physics.
Goldstein, Herbert (1950). Classical Mechanics (1st ed.). Addison-Wesley.
Goldstein, Herbert (1951). Classical Mechanics (1st ed.). Addison-Wesley. ASIN B000OL8LOM.
Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-201-02918-5.
Goldstein, Herbert; Poole, C. P.; Safko, J. L. (2001). Classical Mechanics (3rd ed.). Addison-Wesley. ISBN 978-0-201-65702-9.
Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics.","During the early modern period, scientists such as Galileo, Kepler, Huygens, and Newton laid the foundation for what is now known as classical mechanics. ISBN 978-0-201-65702-9.
Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. Instead, the qualifier distinguishes classical mechanics from physics developed after the revolutions of the early 20th century, which revealed limitations of classical mechanics. The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. The ""classical"" in ""classical mechanics"" does not refer classical antiquity, as it might in, say, classical architecture. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. On the contrary the development of classical mechanics involved substantial change in the methods and philosophy of physics. - Although classical mechanics is largely compatible with other ""classical physics"" theories such as classical electrodynamics and thermodynamics, some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics.
Theoretical expositions of this branch of physics has its origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). Classical Mechanics (1st ed.). It consists of the physical concepts based on foundational works of Sir Isaac Newton, and the mathematical methods invented by Gottfried Wilhelm Leibniz, Joseph-Louis Lagrange, Leonhard Euler, and other contemporaries in the 17th century to describe the motion of bodies under the influence of forces. Similarly, the different behaviour of classical electromagnetism and classical mechanics under velocity transformations led to ","During the early modern period, scientists such as Galileo, Kepler, Huygens, and Newton laid the foundation for what is now known as classical mechanics. ISBN 978-0-201-65702-9.
Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. Instead, the qualifier distinguishes classical mechanics from physics developed after the revolutions of the early 20th century, which revealed limitations of classical mechanics. The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. The ""classical"" in ""classical mechanics"" does not refer classical antiquity, as it might in, say, classical architecture. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. On the contrary the development of classical mechanics involved substantial change in the methods and philosophy of physics. - Although classical mechanics is largely compatible with other ""classical physics"" theories such as classical electrodynamics and thermodynamics, some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. The book provides a thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics.
Theoretical expositions of this branch of physics has its origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). Classical Mechanics (1st ed.). It consists of the physical concepts based on foundational works of Sir Isaac Newton, and the mathematical methods invented by Gottfried Wilhelm Leibniz, Joseph-Louis Lagrange, Leonhard Euler, and other contemporaries in the 17th century to describe the motion of bodies under the influence of forces. Similarly, the different behaviour of classical electromagnetism and classical mechanics under velocity transformations led to [SEP]What is classical mechanics?","['A', 'B', 'C']",1.0
Who shared the other half of the Nobel Prize with Yoichiro Nambu for discovering the origin of the explicit breaking of CP symmetry in the weak interactions?,"was a Japanese theoretical physicist known for his work on CP-violation who was awarded one quarter of the 2008 Nobel Prize in Physics ""for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."" is a Japanese physicist known for his work on CP-violation who was awarded one-fourth of the 2008 Nobel Prize in Physics ""for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."" Known for his contributions to the field of theoretical physics, he was awarded half of the Nobel Prize in Physics in 2008 for the discovery in 1960 of the mechanism of spontaneous broken symmetry in subatomic physics, related at first to the strong interaction's chiral symmetry and later to the electroweak interaction and Higgs mechanism. He was awarded one-half of the 2008 Nobel Prize in Physics ""for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics"". ==See also== * List of Japanese Nobel laureates * List of Nobel laureates affiliated with the University of Tokyo * Nambu, Yoichiro (1985) Quarks, World Scientific, Singapore == References == == External links == * Oral history interview with Yoichiro Nambu on 16 July 2004, American Institute of Physics, Niels Bohr Library & Archives * Yoichiro Nambu, Department of Physics faculty profile, University of Chicago * Profile, Scientific American Magazine * Yoichiro Nambu, Sc.D. Biographical Information * Nambu's most-cited scientific papers * Yoichiro Nambu's earliest book for the scientific layman * Yoichiro Nambu's previously unpublished material, including an original article on spontaneously broken symmetry * ""A History of Nobel Physicists from Wartime Japan"" Article published in the December 1998 issue of Scientific American, co-authored by Laurie Brown and Yoichiro Nambu *Tribute upon Prof. Nambu passing by former student Dr. Madhusree Mukerjee *Guide to the Yoichiro Nambu Papers 1917-2009 at the University of Chicago Special Collections Research Center * Category:1921 births Category:2015 deaths Category:American physicists Category:National Medal of Science laureates Category:People from Fukui Prefecture Category:American string theorists Category:Wolf Prize in Physics laureates Category:Academic staff of the University of Tokyo Category:University of Chicago faculty Category:University of Tokyo alumni Category:Japanese emigrants to the United States Category:American academics of Japanese descent Category:Nobel laureates in Physics Category:American Nobel laureates Category:Recipients of the Order of Culture Category:Members of the United States National Academy of Sciences Category:Institute for Advanced Study visiting scholars Category:J. J. Sakurai Prize for Theoretical Particle Physics recipients Category:Winners of the Max Planck Medal The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. In high school, he loved novels, especially detective and mystery stories and novels by Ryūnosuke Akutagawa. ==Career== At Kyoto University in the early 1970s, he collaborated with Makoto Kobayashi on explaining broken symmetry (the CP violation) within the Standard Model of particle physics. 
Together, with his colleague Toshihide Maskawa, he worked on explaining CP-violation within the Standard Model of particle physics. In 1962, a group of experimentalists at Dubna, on Okun's insistence, unsuccessfully searched for CP-violating kaon decay. ==Experimental status== ===Indirect CP violation=== In 1964, James Cronin, Val Fitch and coworkers provided clear evidence from kaon decay that CP-symmetry could be broken.The Fitch-Cronin Experiment This work won them the 1980 Nobel Prize. This discovery showed that weak interactions violate not only the charge-conjugation symmetry C between particles and antiparticles and the P or parity, but also their combination. Kobayashi and Maskawa's article, ""CP Violation in the Renormalizable Theory of Weak Interaction"", published in 1973, is the fourth most cited high energy physics paper of all time as of 2010. According to the current mathematical formulation of quantum chromodynamics, a violation of CP-symmetry in strong interactions could occur. Maskawa and Kobayashi's 1973 article, ""CP Violation in the Renormalizable Theory of Weak Interaction"", is the fourth most cited high energy physics paper of all time as of 2010. However, no violation of the CP-symmetry has ever been seen in any experiment involving only the strong interaction. The symmetry is known to be broken in the Standard Model through weak interactions, but it is also expected to be broken through strong interactions which govern quantum chromodynamics (QCD), something that has not yet been observed. The lack of an exact CP-symmetry, but also the fact that it is so close to a symmetry, introduced a great puzzle. Since the discovery of CP violation in 1964, physicists have believed that in theory, within the framework of the Standard Model, it is sufficient to search for appropriate Yukawa couplings (equivalent to a mass matrix) in order to generate a complex phase in the CKM matrix, thus automatically breaking CP symmetry. In other words, a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process and so the combined CP-symmetry would be conserved in the weak interaction. Kobayashi and Maskawa were jointly awarded half of the 2008 Nobel Prize in Physics for this work, with the other half going to Yoichiro Nambu. Kobayashi and Maskawa were jointly awarded half of the 2008 Nobel Prize in Physics for this work, with the other half going to Yoichiro Nambu. The other half was split equally between Makoto Kobayashi and Toshihide Maskawa ""for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."" ",Richard Feynman and Julian Schwinger,Makoto Kobayashi and Toshihide Maskawa,Steven Weinberg and Sheldon Glashow,Peter Higgs and Francois Englert,Murray Gell-Mann and George Zweig,B,kaggle200,"The modern explanation for the shift symmetry is now understood to be the Nambu-Goldstone non-linear symmetry realization mode, due to Yoichiro Nambu and Jeffrey Goldstone.
From 1969 onwards the Center awarded the J. Robert Oppenheimer Memorial Prize to recognize physics research. Jocelyn Bell Burnell was the 1978 recipient for her discovery of pulsars. Several other recipients of the J. Robert Oppenheimer Memorial Prize were later awarded the Nobel Prize in Physics (specifically, Sheldon Glashow, Yoichiro Nambu, Frederick Reines, Abdus Salam, and Steven Weinberg). The inaugural recipient, Paul Dirac, was already a Nobel laureate.
However, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that ""CP"" symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that ""CP"" violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics.
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a ""just so"" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.","However, this theory allowed a compound symmetry CP to be conserved. CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics.Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis.
Abel Prize 2008 Abel Prize: John G. Thompson and Jacques Tits Nobel Prize 2008 Nobel Prize in Physiology or Medicine: Harald zur Hausen, Françoise Barré-Sinoussi and Luc Montagnier 2008 Nobel Prize in Physics: Makoto Kobayashi, Toshihide Maskawa and Yoichiro Nambu 2008 Nobel Prize in Chemistry: Osamu Shimomura, Martin Chalfie and Roger Y. Tsien
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a ""just so"" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.","Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that ""CP"" symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. The inaugural recipient, Paul Dirac, was already a Nobel laureate.
However, this theory allowed a compound symmetry CP to be conserved. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that ""CP"" violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. Tsien
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. This discovery earned them half of the 2008 Nobel Prize in Physics.
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. However, this theory allowed a compound symmetry CP to be conserved. - The modern explanation for the shift symmetry is now understood to be the Nambu-Goldstone non-linear sym","Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that ""CP"" symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. The inaugural recipient, Paul Dirac, was already a Nobel laureate.
However, this theory allowed a compound symmetry CP to be conserved. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that ""CP"" violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. Tsien
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. This discovery earned them half of the 2008 Nobel Prize in Physics.
On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. However, this theory allowed a compound symmetry CP to be conserved. - The modern explanation for the shift symmetry is now understood to be the Nambu-Goldstone non-linear sym[SEP]Who shared the other half of the Nobel Prize with Yoichiro Nambu for discovering the origin of the explicit breaking of CP symmetry in the weak interactions?","['B', 'C', 'E']",1.0
What are some models that attempt to account for all observations without invoking supplemental non-baryonic matter?,"There are several proposed types of exotic matter: * Hypothetical particles and states of matter that have ""exotic"" physical properties that would violate known laws of physics, such as a particle having a negative mass. Despite the allowance for CP violation in the Standard Model, it is insufficient to account for the observed baryon asymmetry of the universe (BAU) given the limits on baryon number violation, meaning that beyond-Standard Model sources are needed. * Several particles whose existence has been experimentally confirmed that are conjectured to be exotic hadrons and within the Standard Model. At the same time, a census of baryons in the recent observable universe has found that observed baryonic matter accounts for less than half of that amount. This form of dark matter is composed of ""baryons"", heavy subatomic particles such as protons and neutrons and combinations of these, including non-emitting ordinary atoms. ==Presence== Baryonic dark matter may occur in non-luminous gas or in Massive Astrophysical Compact Halo Objects (MACHOs) – condensed objects such as black holes, neutron stars, white dwarfs, very faint stars, or non-luminous objects like planets and brown dwarfs. ==Estimates of quantity== The total amount of baryonic dark matter can be inferred from models of Big Bang nucleosynthesis, and observations of the cosmic microwave background. In cosmology, the missing baryon problem is an observed discrepancy between the amount of baryonic matter detected from shortly after the Big Bang and from more recent epochs. This is highly nontrivial, since although luminous matter such as stars and galaxies are easily summed, baryonic matter can also exist in highly non-luminous form, such as black holes, planets, and highly diffuse interstellar gas. The missing baryon problem is different from the dark matter problem, which is non-baryonic in nature.See Lambda-CDM model. * Hypothetical particles and states of matter that have not yet been encountered, but whose properties would be within the realm of mainstream physics if found to exist. This effect is sensitive to all free electrons independently of their temperature or the density of the surrounding medium, and thus it can be used to study baryonic matter otherwise not hot enough to be detected. A 2021 article postulated that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies, and that this would explain the missing baryons not accounted for in the 2017 paper. == Current state == Currently, many groups have observed the intergalactic medium and circum-galactic medium to obtain more measurements and observations of baryons to support the leading observations. In astronomy and cosmology, baryonic dark matter is hypothetical dark matter composed of baryons. Observations of the cosmic microwave background and Big Bang nucleosynthesis studies have set constraints on the abundance of baryons in the early universe, finding that baryonic matter accounts for approximately 4.8% of the energy contents of the Universe. The missing baryon problem has been resolved but research groups are working to detect the WHIM using varying methods to confirm results. 
==References== Category:Physical cosmology Category:Baryons One claim of a solution was published in 2017 when two groups of scientists said they found evidence for the location of missing baryons in intergalactic matter. However, the Standard Model is known to violate the conservation of baryon number only non-perturbatively: a global U(1) anomaly. The CGM accounts for 5% of total baryons in the universe. == Detection methods == There are three main methods of detecting the WHIM where the missing baryons lie: the Sunyaev-Zel'dovich effect, Lyman-alpha emission lines, and metal absorption lines. However, the results do place rigorous constraints on the amount of symmetry violation that a physical model can permit. This model has not shown if it can reproduce certain observations regarding the inflation scenario, such as explaining the uniformity of the cosmos on large scales. * Forms of matter that are poorly understood, such as dark matter and mirror matter. ","The Doppler effect, the photoelectric effect, or the Compton effect.","The Higgs boson, the W boson, or the Z boson.","Modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity.","The strong nuclear force, the weak nuclear force, or the electromagnetic force.","Kepler's laws, Newton's laws, or Einstein's theory of general relativity.",C,kaggle200,"Simple artificial neurons, such as the McCulloch–Pitts model, are sometimes described as ""caricature models"", since they are intended to reflect one or more neurophysiological observations, but without regard to realism.
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields, vector fields and scalar fields.
One of the very first approaches to authorship identification, by Mendenhall, can be said to aggregate its observations without averaging them.
Although the scientific community generally accepts dark matter's existence, some astrophysicists, intrigued by specific observations that are not well-explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. These models attempt to account for all observations without invoking supplemental non-baryonic matter.","Historically, a large majority of astronomers and astrophysicists support the ΛCDM model or close relatives of it, but recent observations that contradict the ΛCDM model have recently led some astronomers and astrophysicists to search for alternatives to the ΛCDM model, which include dropping the Friedmann–Lemaître–Robertson–Walker metric or modifying dark energy. On the other hand, Milgrom, McGaugh, and Kroupa have long been leading critics of the ΛCDM model, attacking the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories, brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Theoretical work simultaneously also showed that ancient MACHOs are not likely to account for the large amounts of dark matter now thought to be present in the universe. The Big Bang as it is currently understood could not have produced enough baryons and still be consistent with the observed elemental abundances, including the abundance of deuterium. Furthermore, separate observations of baryon acoustic oscillations, both in the cosmic microwave background and large-scale structure of galaxies, set limits on the ratio of baryons to the total amount of matter. These observations show that a large fraction of non-baryonic matter is necessary regardless of the presence or absence of MACHOs; however MACHO candidates such as primordial black holes could be formed of non-baryonic matter (from pre-baryonic epochs of the early Big Bang).
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matter. Originated by Jacob Bekenstein in 2004, it incorporates various dynamical and non-dynamical tensor fields, vector fields and scalar fields.","These models attempt to account for all observations without invoking supplemental non-baryonic matter. These observations show that a large fraction of non-baryonic matter is necessary regardless of the presence or absence of MACHOs; however MACHO candidates such as primordial black holes could be formed of non-baryonic matter (from pre-baryonic epochs of the early Big Bang).
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matterHistorically, a large majority of astronomers and astrophysicists support the ΛCDM model or close relatives of it, but recent observations that contradict the ΛCDM model have recently led some astronomers and astrophysicists to search for alternatives to the ΛCDM model, which include dropping the Friedmann–Lemaître–Robertson–Walker metric or modifying dark energyFurthermore, separate observations of baryon acoustic oscillations, both in the cosmic microwave background and large-scale structure of galaxies, set limits on the ratio of baryons to the total amount of matterOther proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories, brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Theoretical work simultaneously also showed that ancient MACHOs are not likely to account for the large amounts of dark matter now thought to be present in the universeOn the other hand, Milgrom, McGaugh, and Kroupa have long been leading critics of the ΛCDM model, attacking the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified grav","These models attempt to account for all observations without invoking supplemental non-baryonic matter.These observations show that a large fraction of non-baryonic matter is necessary regardless of the presence or absence of MACHOs; however MACHO candidates such as primordial black holes could be formed of non-baryonic matter (from pre-baryonic epochs of the early Big Bang).
Tensor–vector–scalar gravity (TeVeS) is a proposed relativistic theory that is equivalent to Modified Newtonian dynamics (MOND) in the non-relativistic limit, which purports to explain the galaxy rotation problem without invoking dark matterHistorically, a large majority of astronomers and astrophysicists support the ΛCDM model or close relatives of it, but recent observations that contradict the ΛCDM model have recently led some astronomers and astrophysicists to search for alternatives to the ΛCDM model, which include dropping the Friedmann–Lemaître–Robertson–Walker metric or modifying dark energyFurthermore, separate observations of baryon acoustic oscillations, both in the cosmic microwave background and large-scale structure of galaxies, set limits on the ratio of baryons to the total amount of matterOther proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories, brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity.
Theoretical work simultaneously also showed that ancient MACHOs are not likely to account for the large amounts of dark matter now thought to be present in the universeOn the other hand, Milgrom, McGaugh, and Kroupa have long been leading critics of the ΛCDM model, attacking the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the Friedmann equations as seen in proposals such as modified grav[SEP]What are some models that attempt to account for all observations without invoking supplemental non-baryonic matter?","['C', 'B', 'A']",1.0
What is the purpose of the proximity-focusing design in a RICH detector?,"This is because a ring light can be used to extend the illumination aperture * Can deliver color information * Can measure on rough surfaces Disadvantages: * Can not be used if the surface of the sample does not give structure in the image. In visual perception, the near point is the closest point at which an object can be placed and still form a focused image on the retina, within the eye's accommodation range. # Then for each position the focus over each plane is calculated # The plane with the best focus is used to get a sharp image. the corresponding depth gives the depth at this position- ==Optics== Focus variation requires an optics with very little depth of field. Focus variation is a method used to sharpen images and to measure surface irregularities by means of optics with limited depth of field. ==Algorithm== The algorithm works as follows: # At first images with difference focus are captured. The Dual speed focuser is a focusing mechanism used in precision optics such as advanced amateur astronomical telescopes and laboratory microscopes. Proximity may refer to: * Distance, a numerical description of how far apart objects are * Proxemics, the study of human spatial requirements and the effects of population density * Proximity (2000 film), an action/thriller film * Proximity (2020 film), a science fiction drama film * Proximity fuze, a fuze that detonates an explosive device automatically when the distance to the target becomes smaller than a predetermined value * Proximity sensor, a sensor able to detect the presence of nearby objects without any physical contact * Proximity space, or nearness space, in topology * Proximity (horse) ==See also== * * These objectives have a high numerical aperture which gives a small depth of field. ==Usage== The use of this method is for optical surface metrology and coordinate-measuring machine. This is different from the two separate focusing knobs seen on low level microscopes. For example, if a person has and the typical near point distance at their age is , then the optical power needed is where one diopter is the reciprocal of one meter. ==References== Category:Ophthalmology The proximity effect in electron beam lithography (EBL) is the phenomenon that the exposure dose distribution, and hence the developed pattern, is wider than the scanned pattern due to the interactions of the primary beam electrons with the resist and substrate. The fine focusing knob connects to the input axle while the fast focusing knob connects to the holder shelf. Another common reason to employ a dual-speed focuser is with the use of short focal length eyepieces, where the depth of focus is short, requiring critically accurate focusing. Sometimes, near point is given in diopters (see ), which refers to the inverse of the distance. Focus variation is one of the described methods. ==See also== * Roughness * Surface metrology ==References== Category:Optical metrology Category:Metrology This can be realized if a microscopy like optics and a microscope objective is used. This is done by moving the sample or the optics in relation to each other. This backscattering process originates e.g. from a collision with a heavy particle (i.e. substrate nucleus) and leads to wide-angle scattering of the light electron from a range of depths (micrometres) in the substrate. 
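The focus-variation procedure outlined above (capture a focal stack, score the focus of each plane, and keep the best-focused plane at each position to recover depth) can be sketched in a few lines. This is only a minimal sketch; the variance-based focus measure, the window size and the array shapes are illustrative assumptions rather than details taken from the passage:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def depth_from_focus(stack, window=9):
        """stack: array of shape (n_planes, H, W), one image per focus position.
        Returns the index of the best-focused plane for every pixel."""
        scores = []
        for img in stack.astype(float):
            mean = uniform_filter(img, window)
            mean_sq = uniform_filter(img * img, window)
            scores.append(mean_sq - mean * mean)   # local variance as focus measure
        scores = np.stack(scores)                  # (n_planes, H, W)
        return np.argmax(scores, axis=0)           # depth index per pixel

    # toy usage: a stack of 5 random "images"
    rng = np.random.default_rng(0)
    depth_map = depth_from_focus(rng.random((5, 64, 64)))
    print(depth_map.shape)  # (64, 64)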
A dual speed focuser can provide two focusing speeds by using a set of co-axial knobs, one for fast focusing and another for fine focusing when the film or CCD is near the perfect focal plane. For example a normal eye would have a near point of \frac{1}{11\ \text{cm}} = 9\ \text{diopters}. == Vision correction == A person with hyperopia has a near point that is further away than the typical near point for someone their age, and hence the person is unable to bring an object at the typical near point distance into sharp focus. The cylinder case of a dual speed focuser is fixed on the telescope tube. ","To emit a cone of Askaryan radiation that traverses a small distance and is detected on the photon detector plane, creating a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap.","To emit a cone of Bremsstrahlung radiation that traverses a large distance and is detected on the photon detector plane, creating a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap.","To emit a cone of Cherenkov light that traverses a large distance and is detected on the photon detector plane, creating a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap.","To emit a cone of Cherenkov light that traverses a small distance and is detected on the photon detector plane, creating a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap.","To emit a cone of Bremsstrahlung radiation that traverses a small distance and is detected on the photon detector plane, creating a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap.",D,kaggle200,"In a DIRC (Detection of Internally Reflected Cherenkov light), another design of a RICH detector, light that is captured by total internal reflection inside the solid radiator reaches the light sensors at the detector perimeter, the precise rectangular cross section of the radiator preserving the angular information of the Cherenkov light cone. One example is the DIRC of the BaBar experiment at SLAC.
In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detector. With a suitably focusing optical system this allows reconstruction of a ring, similar to that above, the radius of which gives a measure of the Cherenkov emission angle θc. The resolving power of this method is illustrated by comparing the Cherenkov angle ""per photon"", see the first plot above, with the mean Cherenkov angle ""per particle"" (averaged over all photons emitted by that particle) obtained by ring-imaging, shown below; the greatly enhanced separation between particle types is very clear:
In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector plane. The image is a ring of light the radius of which is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is mainly determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN.
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high-speed charged particle traverses a suitable medium, often called radiator. This light cone is detected on a position sensitive planar photon detector, which allows reconstructing a ring or disc, whose radius is a measure for the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.","In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detector. With a suitably focusing optical system this allows reconstruction of a ring, similar to that above, the radius of which gives a measure of the Cherenkov emission angle θc . The resolving power of this method is illustrated by comparing the Cherenkov angle per photon, see the first plot above, with the mean Cherenkov angle per particle (averaged over all photons emitted by that particle) obtained by ring-imaging, shown below; the greatly enhanced separation between particle types is very clear: This ability of a RICH system to successfully resolve different hypotheses for the particle type depends on two principal factors, which in turn depend upon the listed sub-factors; The effective angular resolution per photon, σ Chromatic dispersion in the radiator ( n varies with photon frequency) Aberrations in the optical system Position resolution of the photon detector The maximum number of detected photons in the ring-image, Nc The length of radiator through which the particle travels Photon transmission through the radiator material Photon transmission through the optical system Quantum efficiency of the photon detectors σ is a measure of the intrinsic optical precision of the RICH detector. Nc is a measure of the optical response of the RICH; it can be thought of as the limiting case of the number of actually detected photons produced by a particle whose velocity approaches that of light, averaged over all relevant particle trajectories in the RICH detector. 
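As a rough numerical companion to the proximity-focusing geometry described above (ring radius set by the Cherenkov emission angle and the proximity gap), here is a minimal sketch assuming cos θc = 1/(nβ) and r ≈ gap·tan θc; the momentum, particle masses, refractive index and gap length are illustrative values, not parameters quoted by the text:

    import math

    def cherenkov_ring_radius(p_gev, mass_gev, n, gap_cm):
        """Ring radius (cm) on the photon detector of a proximity-focusing RICH."""
        beta = p_gev / math.sqrt(p_gev**2 + mass_gev**2)
        if n * beta <= 1.0:
            return None                      # below Cherenkov threshold: no light
        theta_c = math.acos(1.0 / (n * beta))
        return gap_cm * math.tan(theta_c)    # thin radiator, small-gap approximation

    # illustrative numbers: 2 GeV/c pion and kaon in a C6F14-like radiator (n ~ 1.30), 8 cm gap
    for name, m in [("pion", 0.1396), ("kaon", 0.4937)]:
        print(name, cherenkov_ring_radius(2.0, m, 1.30, 8.0))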
The average number of Cherenkov photons detected, for a slower particle, of charge q (normally ±1), emitting photons at angle θc is then N = Nc q² sin²(θc) / (1 − 1/n²), and the precision with which the mean Cherenkov angle can be determined with these photons is approximately σm = σ/√N, to which the angular precision of the emitting particle's measured direction must be added in quadrature, if it is not negligible compared to σm. Given the known momentum of the emitting particle and the refractive index of the radiator, the expected Cherenkov angle for each particle type can be predicted, and its difference from the observed mean Cherenkov angle calculated. Dividing this difference by σm then gives a measure of the 'number of sigma' deviation of the hypothesis from the observation, which can be used in computing a probability or likelihood for each possible hypothesis. The following figure shows the 'number of sigma' deviation of the kaon hypothesis from a true pion ring image (π not k) and of the pion hypothesis from a true kaon ring image (k not π), as a function of momentum, for a RICH with n = 1.0005, Nc = 25, σ = 0.64 milliradians; also shown are the average number of detected photons from pions (Ngπ) or from kaons (Ngk). One can see that the RICH's ability to separate the two particle types exceeds 4-sigma everywhere between threshold and 80 GeV/c, finally dropping below 3-sigma at about 100 GeV. It is important to note that this result is for an 'ideal' detector, with homogeneous acceptance and efficiency, normal error distributions and zero background. No such detector exists, of course, and in a real experiment much more sophisticated procedures are actually used to account for those effects; position dependent acceptance and efficiency; non-Gaussian error distributions; non negligible and variable event-dependent backgrounds. In practice, for the multi-particle final states produced in a typical collider experiment, separation of kaons from other final state hadrons, mainly pions, is the most important purpose of the RICH. In that context the two most vital RICH functions, which maximise signal and minimise combinatorial backgrounds, are its ability to correctly identify a kaon as a kaon and its ability not to misidentify a pion as a kaon. The related probabilities, which are the usual measures of signal detection and background rejection in real data, are plotted below to show their variation with momentum (simulation with 10% random background); note that the ~30% π → k misidentification rate at 100 GeV is, for the most part, due to the presence of 10% background hits (faking photons) in the simulated detector; the 3-sigma separation in the mean Cherenkov angle (shown in the 4th plot above) would, by itself, only account for about 6% misidentification. More detailed analyses of the above type, for operational RICH detectors, can be found in the published literature.
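A small numerical sketch of the 'number of sigma' estimate just described, reusing the example parameters quoted above (n = 1.0005, Nc = 25, σ = 0.64 milliradians); the particle masses and the omission of the quadrature term for the track direction are my own simplifying assumptions:

    import math

    N_REFRACTIVE, N_C, SIGMA_PHOTON = 1.0005, 25.0, 0.64e-3  # values quoted in the text

    def theta_c(p, m, n=N_REFRACTIVE):
        """Cherenkov angle (rad) for momentum p and mass m (GeV); None below threshold."""
        beta = p / math.sqrt(p * p + m * m)
        return math.acos(1.0 / (n * beta)) if n * beta > 1.0 else None

    def n_sigma(p, m_true, m_hyp):
        """Deviation of the hypothesis mass m_hyp from a ring produced by mass m_true."""
        t_true, t_hyp = theta_c(p, m_true), theta_c(p, m_hyp)
        if t_true is None or t_hyp is None:
            return None
        n_photons = N_C * math.sin(t_true) ** 2 / (1.0 - 1.0 / N_REFRACTIVE ** 2)  # charge +-1
        sigma_m = SIGMA_PHOTON / math.sqrt(n_photons)   # precision on the mean angle
        return abs(t_true - t_hyp) / sigma_m

    m_pi, m_k = 0.1396, 0.4937                          # GeV, illustrative mass constants
    for p in (20.0, 50.0, 80.0, 100.0):
        print(p, "GeV/c  pi/K separation:", n_sigma(p, m_pi, m_k))

With these inputs the separation comes out near 4σ at 80 GeV/c and falls below 3σ around 100 GeV/c, consistent with the figures quoted in the passage.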
In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector plane. The image is a ring of light the radius of which is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is mainly determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN.
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high-speed charged particle traverses a suitable medium, often called radiator. This light cone is detected on a position sensitive planar photon detector, which allows reconstructing a ring or disc, whose radius is a measure for the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.","Both focusing and proximity-focusing detectors are in useIn a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal planeMore detailed analyses of the above type, for operational RICH detectors, can be found in the published literature.
In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector planeAn example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN.
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980sAn example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERNIn the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector planeAn example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detectorThe resolving power of this method is illustrated by comparing the Cherenkov angle ""per photon"", see the first plot above, with the mean Cherenkov angle ""per particle"" (averaged over all photons emitted by that particle) obtained by ring-imaging, shown below; the greatly enhanced separation between particle types is very clear:
In the more compact proximity-focusing design a th","Both focusing and proximity-focusing detectors are in useIn a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal planeMore detailed analyses of the above type, for operational RICH detectors, can be found in the published literature.
In the more compact proximity-focusing design a thin radiator volume emits a cone of Cherenkov light which traverses a small distance, the proximity gap, and is detected on the photon detector planeAn example of a proximity gap RICH detector is the High Momentum Particle Identification (HMPID), one of the detectors of ALICE (A Large Ion Collider Experiment), which is one of the five experiments at the LHC (Large Hadron Collider) at CERN.
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980sAn example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERNIn the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector planeAn example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.In a RICH detector the photons within this light-cone pass through an optical system and impinge upon a position sensitive photon detectorThe resolving power of this method is illustrated by comparing the Cherenkov angle ""per photon"", see the first plot above, with the mean Cherenkov angle ""per particle"" (averaged over all photons emitted by that particle) obtained by ring-imaging, shown below; the greatly enhanced separation between particle types is very clear:
In the more compact proximity-focusing design a th[SEP]What is the purpose of the proximity-focusing design in a RICH detector?","['D', 'E', 'C']",1.0
What is a light-year?,"Light is the means by which human beings see themselves, each other, and their place in the Universe. A particle of negligible mass, that orbits a body of 1 solar mass in this period, has a mean axis for its orbit of 1 astronomical unit by definition. A Gaussian year is defined as 365.2568983 days. The value is derived from Kepler's third law as :\mbox{1 Gaussian year}= \frac {2\pi} {k} \, where :k is the Gaussian gravitational constant. ==See also== ==References== Category:Types of year Category:Astronomical coordinate systems This radiation is a relic of the light that filled the early cosmos almost 14 billion years ago, that can still be observed today across the sky at much longer wavelengths than visible light, in the domain of microwaves. It was adopted by Carl Friedrich Gauss as the length of the sidereal year in his studies of the dynamics of the solar system. People throughout the world and across history have always attached great importance to light. Calculating the speed of propagation of these waves, he obtained the value of the speed of light, and concluded that it was an electromagnetic wave. Light is an essential part of culture and art, and is a unifying symbol for the world. In this context the goals of IYL 2015 align with the 17 Sustainable Development Goals which were adopted by the United Nations General Assembly in 2015. == Anniversaries during 2015 == The year 2015 was a natural candidate for the International Year of Light as it represented the remarkable conjunction of a number of important milestones in the history of the science of light. === Great works on optics by Ibn Al-Haytham - over 1000 years === The year 2015 marks the 1000th anniversary since the appearance of the remarkable seven-volume treatise on optics Kitab al-Manazir, written by the Arab scientist Ibn al-Haytham (also known by the Latinization Alhazen or Alhacen),.The Latin forms of his name, remain in popular use, but are out of use in scholarly contexts. Maxwell also left us outstanding contributions to colour theory, optics, Saturn's rings, statics, dynamics, solids, instruments, and statistical physics. In the General Theory of Relativity, the doctrine of space and time no longer figures as a fundamental independent of the rest of physics. Articles in major newspapers on TV and in other media appeared around the world, and there many dedicated scientific conferences, including a three-day conference in the Philippines, ""Project Einstein 2015: An International Conference Celebrating 100 Years of General Relativity."" At the end of the 18th century, physics was dominated by Newton's particle theory of light. The Lightyear 0 (formerly the Lightyear One) is an all-solar-electric car by Lightyear. However, his most important contributions were to electromagnetism. === Einstein and the General Theory of Relativity - 1915 === The year 2015 marked the 100th anniversary of Einstein's General Theory of Relativity. The Lightyear company claims the 782 solar cells across the car can add of range per day during summer. Initially NASA's COBE and WMAP satellites, and in recent years ESA's Planck satellite, have provided precise maps of the CMB that enable astrophysicists to delve into the history of the Universe, constraining its geometry and the properties of its constituents. Many events on 25 November 2015 celebrated the 100th anniversary of Einstein's General Theory of Relativity. 
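The Gaussian-year relation quoted above, 1 Gaussian year = 2π/k days, is easy to check numerically; the value of k (the Gaussian gravitational constant) used below is the standard one and is stated here as an assumption, since the passage does not spell it out:

    import math

    k = 0.01720209895               # Gaussian gravitational constant, rad/day (assumed standard value)
    gaussian_year = 2 * math.pi / k
    print(round(gaussian_year, 7))  # ~365.2568983 days, matching the figure in the text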
The International Year of Light contributes significantly to fulfilling the missions of UNESCO to the building of peace, the alleviation of poverty, to sustainable development and intercultural dialogue through education, science, culture, and communication. ",A unit of time used to express astronomical distances that is equivalent to the time that an object moving at the speed of light in vacuum would take to travel in one Julian year: approximately 9.46 trillion seconds (9.46×1012 s) or 5.88 trillion minutes (5.88×1012 min).,A unit of length used to express astronomical distances that is equivalent to the distance that an object moving at the speed of light in vacuum would travel in one Julian year: approximately 9.46 trillion kilometres (9.46×1012 km) or 5.88 trillion miles (5.88×1012 mi).,A unit of temperature used to express astronomical distances that is equivalent to the temperature of an object moving at the speed of light in vacuum in one Julian year: approximately 9.46 trillion Kelvin (9.46×1012 K) or 5.88 trillion Celsius (5.88×1012 °.,A unit of energy used to express astronomical distances that is equivalent to the energy of an object moving at the speed of light in vacuum in one Julian year: approximately 9.46 trillion joules (9.46×1012 J) or 5.88 trillion watt-hours (5.88×1012 Wh).,A unit of mass used to express astronomical distances that is equivalent to the mass of an object moving at the speed of light in vacuum in one Julian year: approximately 9.46 trillion kilograms (9.46×1012 kg) or 5.88 trillion pounds (5.88×1012 lb).,B,kaggle200,"Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around per second, so 1 light-year is about or AU. Hence, Proxima Centauri is approximately 4.243 light-years from Earth.
Code page 1012 (CCSID 1012), also known as CP1012 or I7DEC, is IBM's code page for the Italian version of ISO 646, also known as ISO 646-IT IR 15. The character set was originally specified in UNI 0204-70. It is also part of DEC's National Replacement Character Set (NRCS) for their VT220 terminals.
Astronomical distances are sometimes expressed in light-years, especially in popular science publications and media. A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecs. In round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion miles. Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.
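A quick cross-check of the light-year figures in this record, assuming only the defined speed of light and a Julian year of 365.25 days (standard values, not taken from the passage):

    c_km_s = 299_792.458                 # speed of light in km/s (exact by definition)
    julian_year_s = 365.25 * 86_400      # one Julian year in seconds

    ly_km = c_km_s * julian_year_s
    print(f"1 light-year ≈ {ly_km:.4e} km")                    # ≈ 9.4607e+12 km
    print(f"Proxima Centauri ≈ {4.243 * ly_km:.3e} km away")   # using the 4.243 ly figure above
    print(f"1 light-year ≈ {ly_km / 3.0857e13:.4f} pc")        # assuming 1 pc ≈ 3.0857e13 km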
A light-year, alternatively spelled light year, is a large unit of length used to express astronomical distances and is equivalent to about 9.46 trillion kilometers (9.46×10¹² km), or 5.88 trillion miles (5.88×10¹² mi). As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in a vacuum in one Julian year (365.25 days). Because it includes the time-measurement word ""year"", the term ""light-year"" is sometimes misinterpreted as a unit of time.","Mathematics: 7,625,597,484,987 – a number that often appears when dealing with powers of 3. It can be expressed as 19683³, 27⁹, 3²⁷ and 3^(3^3), or when using Knuth's up-arrow notation it can be expressed as 3↑↑3 and 3↑↑↑2. Astronomy: A light-year, as defined by the International Astronomical Union (IAU), is the distance that light travels in a vacuum in one year, which is equivalent to about 9.46 trillion kilometers (9.46×10¹² km).
A light-year, alternatively spelled light year, is a unit of length used to express astronomical distances and is equivalent to about 9.46 trillion kilometers (9.46×10¹² km), or 5.88 trillion miles (5.88×10¹² mi). As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in a vacuum in one Julian year (365.25 days). Because it includes the word ""year"", the term is sometimes misinterpreted as a unit of time. The light-year is most often used when expressing distances to stars and other distances on a galactic scale, especially in non-specialist contexts and popular science publications. The unit most commonly used in professional astronomy is the parsec (symbol: pc, about 3.26 light-years) which derives from astrometry; it is the distance at which one astronomical unit (au) subtends an angle of one second of arc.
light-year (ly) A unit of length used to express astronomical distances that is equivalent to the distance that an object moving at the speed of light in vacuum would travel in one Julian year: approximately 9.46 trillion kilometres (9.46×1012 km) or 5.88 trillion miles (5.88×1012 mi). Though the light-year is often used to measure galactic-scale distances in non-specialist publications, the unit of length most commonly used in professional astrometry is the parsec.","As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in a vacuum in one Julian year (365.25 days)A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecsBecause it includes the word ""year"", the term is sometimes misinterpreted as a unit of time.The light-year is most often used when expressing distances to stars and other distances on a galactic scale, especially in non-specialist contexts and popular science publicationsLight in a vacuum travels around per second, so 1 light-year is about or AUIn round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion milesIt can be expressed as 19683 3 , 27 9 , 27 , 333 and 33 or when using Knuth's up-arrow notation it can be expressed as ↑↑ 3 and ↑↑↑ 2 Astronomy: A light-year, as defined by the International Astronomical Union (IAU), is the distance that light travels in a vacuum in one year, which is equivalent to about 9.46 trillion kilometers (9.46×1012 km).
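The parsec definition just given (the distance at which one astronomical unit subtends one second of arc) can be verified the same way; the astronomical unit in kilometres is the IAU defining value and is assumed here:

    import math

    au_km = 149_597_870.7                    # 1 astronomical unit in km (IAU definition)
    arcsec_rad = math.pi / (180 * 3600)      # one second of arc in radians

    pc_km = au_km / math.tan(arcsec_rad)     # distance at which 1 au subtends 1 arcsec
    ly_km = 299_792.458 * 365.25 * 86_400    # light-year, as in the previous sketch
    print(pc_km / ly_km)                     # ≈ 3.26 light-years per parsec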
A light-year, alternatively spelled light year, is a unit of length used to express astronomical distances and is equivalent to about 9.46 trillion kilometers (9.46×1012 km), or 5.88 trillion miles (5.88×1012 mi)Because it includes the time-measurement word ""year"", the term ""light-year"" is sometimes misinterpreted as a unit of time.The unit most commonly used in professional astronomy is the parsec (symbol: pc, about 3.26 light-years) which derives from astrometry; it is the distance at which one astronomical unit (au) subtends an angle of one second of arc.
light-year (ly) A unit of length used to express astronomical distances that is equivalent to the distance that an object moving at the speed of light in vacuum would travel in one Julian year: approximately 9.46 trillion kilometres (9.46×1012 km) or 5.88 trillion miles (5.88×1012 mi)Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.
A light-year, alternatively spelled light year, is a large unit of length used to expre","As defined by the International Astronomical Union (IAU), a light-year is the distance that light travels in a vacuum in one Julian year (365.25 days)A light-year is the distance light travels in one Julian year, around 9461 billion kilometres, 5879 billion miles, or 0.3066 parsecsBecause it includes the word ""year"", the term is sometimes misinterpreted as a unit of time.The light-year is most often used when expressing distances to stars and other distances on a galactic scale, especially in non-specialist contexts and popular science publicationsLight in a vacuum travels around per second, so 1 light-year is about or AUIn round figures, a light year is nearly 10 trillion kilometres or nearly 6 trillion milesIt can be expressed as 19683 3 , 27 9 , 27 , 333 and 33 or when using Knuth's up-arrow notation it can be expressed as ↑↑ 3 and ↑↑↑ 2 Astronomy: A light-year, as defined by the International Astronomical Union (IAU), is the distance that light travels in a vacuum in one year, which is equivalent to about 9.46 trillion kilometers (9.46×1012 km).
A light-year, alternatively spelled light year, is a unit of length used to express astronomical distances and is equivalent to about 9.46 trillion kilometers (9.46×1012 km), or 5.88 trillion miles (5.88×1012 mi)Because it includes the time-measurement word ""year"", the term ""light-year"" is sometimes misinterpreted as a unit of time.The unit most commonly used in professional astronomy is the parsec (symbol: pc, about 3.26 light-years) which derives from astrometry; it is the distance at which one astronomical unit (au) subtends an angle of one second of arc.
light-year (ly) A unit of length used to express astronomical distances that is equivalent to the distance that an object moving at the speed of light in vacuum would travel in one Julian year: approximately 9.46 trillion kilometres (9.46×1012 km) or 5.88 trillion miles (5.88×1012 mi)Proxima Centauri, the closest star to Earth after the Sun, is around 4.2 light-years away.
A light-year, alternatively spelled light year, is a large unit of length used to expre[SEP]What is a light-year?","['B', 'A', 'C']",1.0
What is the main advantage of ferroelectric memristors?,"The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved. ===Carbon nanotube memristor=== In 2013, Ageev, Blinov et al. reported observing memristor effect in structure based on vertically aligned carbon nanotubes studying bundles of CNT by scanning tunneling microscope. Later it was found that CNT memristive switching is observed when a nanotube has a non-uniform elastic strain ΔL0. It was shown that the memristive switching mechanism of strained СNT is based on the formation and subsequent redistribution of non-uniform elastic strain and piezoelectric field Edef in the nanotube under the influence of an external electric field E(x,t). ===Biomolecular memristor=== Biomaterials have been evaluated for use in artificial synapses and have shown potential for application in neuromorphic systems. The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. High switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies. === Atomristor === Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2020, atomistic understanding of the conductive virtual point mechanism was elucidated in an article in nature nanotechnology. ===Ferroelectric memristor=== The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. The mechanism of memristive behavior in such structures is based entirely on the electron spin degree of freedom which allows for a more convenient control than the ionic transport in nanostructures. One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. Five years later he and his student Sung Mo Kang generalized the theory of memristors and memristive systems including a property of zero crossing in the Lissajous curve characterizing current vs. voltage behavior. ===Twenty-first century=== On May 1, 2008, Strukov, Snider, Stewart, and Williams published an article in Nature identifying a link between the two- terminal resistance switching behavior found in nanoscale systems and memristors. The identification of memristive properties in electronic devices has attracted controversy. However, hysteretic conductance in silicon has been associated to memristive effect in 2009 only , while Tony Kenyon and his group has clearly demonstrated that the resistive switching in silicon oxide thin films is due to silicon nanoinclusions in highly nonstoichiometric suboxide phases . 
===Polymeric memristor=== In 2004, Krieger and Spitzer described dynamic doping of polymer and inorganic dielectric-like materials that improved the switching characteristics and retention required to create functioning nonvolatile memory cells. The memristor plays a crucial role in mimicking the charge storage effect in the diode base, and is also responsible for the conductivity modulation phenomenon (that is so important during forward transients). ===Criticisms=== In 2008, a team at HP Labs found experimental evidence for the Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor concept. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. This highlights the importance of understanding what role oxygen vacancies play in the memristive operation of devices that deploy complex oxides with an intrinsic property such as ferroelectricity or multiferroicity. =====Intrinsic mechanism===== The magnetization state of a MTJ can be controlled by Spin- transfer torque, and can thus, through this intrinsic physical mechanism, exhibit memristive behavior. The memristive behaviour of switches was found to be accompanied by a prominent memcapacitive effect. Other researchers noted that memristor models based on the assumption of linear ionic drift do not account for asymmetry between set time (high-to-low resistance switching) and reset time (low-to-high resistance switching) and do not provide ionic mobility values consistent with experimental data. The video also illustrates how to understand deviations in the pinched hysteresis characteristics of physical memristors. On a short time scale, these structures behave almost as an ideal memristor. ","Ferroelectric memristors have a higher resistance than other types of memristors, making them more suitable for high-power applications.","Ferroelectric domain dynamics can be tuned, allowing for the engineering of memristor response, and resistance variations are due to purely electronic phenomena, making the device more reliable.","Ferroelectric memristors have a more complex structure than other types of memristors, allowing for a wider range of applications.",Ferroelectric memristors have a unique piezoelectric field that allows for the creation of non-uniform elastic strain and a more stable structure.,"Ferroelectric memristors are based on vertically aligned carbon nanotubes, which offer a more efficient and faster switching mechanism than other materials.",B,kaggle200,"First reported in 1971, ferroelectric polymers are polymer chains that must exhibit ferroelectric behavior, hence piezoelectric and pyroelectric behavior.
Soft transducers in the form of ferroelectric polymer foams have been shown to have great potential.
Ferroelectric capacitor is a capacitor based on a ferroelectric material. In contrast, traditional capacitors are based on dielectric materials. Ferroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: ROFF ≫ RON (an effect called Tunnel Electro-Resistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither RON nor ROFF, but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.","The first widespread ferroelectric ceramics material, which had ferroelectric properties not only in the form of a single crystal, but in the polycrystalline state, i.e. in the form of ceramic barium titanate was BaO•TiO2, which is important now. Add to it some m-Liv not significantly change its properties. A significant nonlinearity of capacitance capacitor having ferroelectric ceramics materials, so-called varikondy, types of VC-1 VC-2, VC-3 and others.
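To make the statement in the ferroelectric-memristor passage above, that the resistance sits between RON and ROFF while domains nucleate and grow, a little more concrete, here is a deliberately simplified toy model that treats switched and unswitched domains as parallel conduction channels; the mixing rule, the resistance values and the sigmoidal switching curve are illustrative assumptions, not the device physics reported by the source:

    import math

    R_ON, R_OFF = 1e3, 1e5            # ohms, illustrative two-order-of-magnitude contrast

    def resistance(s):
        """Resistance when a fraction s of the barrier area has switched polarization.
        Switched (low-resistance) and unswitched (high-resistance) domains are treated
        as parallel resistors, a common simplification not taken from the passage."""
        return 1.0 / (s / R_ON + (1.0 - s) / R_OFF)

    def switched_fraction(v, v0=1.0, width=0.15):
        """Toy sigmoid for how the switched-domain fraction grows with applied voltage."""
        return 1.0 / (1.0 + math.exp(-(v - v0) / width))

    for v in (0.5, 0.9, 1.0, 1.1, 1.5):
        print(f"V = {v:.1f} V  ->  R ≈ {resistance(switched_fraction(v)):.3e} ohm")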
Ferroelectric capacitor is a capacitor based on a ferroelectric material. In contrast, traditional capacitors are based on dielectric materials. Ferroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
Ferroelectric memristor The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodes. Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: ROFF ≫ RON (an effect called Tunnel Electro-Resistance). In general, the polarization does not switch abruptly. The reversal occurs gradually through the nucleation and growth of ferroelectric domains with opposite polarization. During this process, the resistance is neither RON or ROFF, but in between. When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance value. The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.","The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involvedThe ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.Ferroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodesFerroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
Ferroelectric memristor The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodesSwitching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: ROFF ≫ RON (an effect called Tunnel Electro-Resistance)Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: (an effect called Tunnel Electro-Resistance)When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance valueThe first widespread ferroelectric ceramics material, which had ferroelectric properties not only in the form of a single crystal, but in the polycrystalline state, i.e- First reported in 1971, ferroelectric polymers are polymer chains that must exhibit ferroelectric behavior, hence piezoelectric and pyroelectric behavior.
Soft transducers in the form of ferroelectric p","The ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involvedThe ferroelectric memristor's main advantages are that ferroelectric domain dynamics can be tuned, offering a way to engineer the memristor response, and that the resistance variations are due to purely electronic phenomena, aiding device reliability, as no deep change to the material structure is involved.Ferroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodesFerroelectric devices are used in digital electronics as part of ferroelectric RAM, or in analog electronics as tunable capacitors (varactors).
Ferroelectric memristor The ferroelectric memristor is based on a thin ferroelectric barrier sandwiched between two metallic electrodesSwitching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: ROFF ≫ RON (an effect called Tunnel Electro-Resistance)Switching the polarization of the ferroelectric material by applying a positive or negative voltage across the junction can lead to a two order of magnitude resistance variation: (an effect called Tunnel Electro-Resistance)When the voltage is cycled, the ferroelectric domain configuration evolves, allowing a fine tuning of the resistance valueThe first widespread ferroelectric ceramics material, which had ferroelectric properties not only in the form of a single crystal, but in the polycrystalline state, i.e- First reported in 1971, ferroelectric polymers are polymer chains that must exhibit ferroelectric behavior, hence piezoelectric and pyroelectric behavior.
Soft transducers in the form of ferroelectric p[SEP]What is the main advantage of ferroelectric memristors?","['B', 'C', 'D']",1.0
What is the term used to describe the conduction that occurs in non-crystalline semiconductors by charges quantum tunnelling from one localised site to another?,"The modern understanding of the properties of a semiconductor relies on quantum physics to explain the movement of charge carriers in a crystal lattice. An (intrinsic) semiconductor has a band gap that is smaller than that of an insulator and at room temperature, significant numbers of electrons can be excited to cross the band gap.Charles Kittel (1995) Introduction to Solid State Physics, 7th ed. Wiley, . thumb|right|Schematic representation of an electron tunneling through a barrier In electronics/spintronics, a tunnel junction is a barrier, such as a thin insulating layer or electric potential, between two electrically conducting materials. A semiconductor is a material which has an electrical conductivity value falling between that of a conductor, such as copper, and an insulator, such as glass. They function as an ohmic electrical contact in the middle of a semiconductor device. ==Magnetic tunnel junction== In magnetic tunnel junctions, electrons tunnel through a thin insulating barrier from one magnetic material to another. Some materials, such as titanium dioxide, can even be used as insulating materials for some applications, while being treated as wide-gap semiconductors for other applications. ===Charge carriers (electrons and holes)=== The partial filling of the states at the bottom of the conduction band can be understood as adding electrons to that band. The actual concentration of electrons is typically very dilute, and so (unlike in metals) it is possible to think of the electrons in the conduction band of a semiconductor as a sort of classical ideal gas, where the electrons fly around freely without being subject to the Pauli exclusion principle. The amount of impurity, or dopant, added to an intrinsic (pure) semiconductor varies its level of conductivity. Tunnel injection is a field electron emission effect; specifically a quantum process called Fowler–Nordheim tunneling, whereby charge carriers are injected to an electric conductor through a thin layer of an electric insulator. Electrons (or quasiparticles) pass through the barrier by the process of quantum tunnelling. Such carrier traps are sometimes purposely added to reduce the time needed to reach the steady-state. ===Doping=== The conductivity of semiconductors may easily be modified by introducing impurities into their crystal lattice. These are tunnel junctions, the study of which requires understanding quantum tunnelling. Electrical conductivity arises due to the presence of electrons in states that are delocalized (extending through the material), however in order to transport electrons a state must be partially filled, containing an electron only part of the time.As in the Mott formula for conductivity, see If the state is always occupied with an electron, then it is inert, blocking the passage of other electrons via that state. After the process is completed and the silicon has reached room temperature, the doping process is done and the semiconducting material is ready to be used in an integrated circuit. ==Physics of semiconductors== ===Energy bands and electrical conduction=== Semiconductors are defined by their unique electric conductive behavior, somewhere between that of a conductor and an insulator. This phenomenon is known as dynamical tunnelling. 
=== Tunnelling in phase space === The concept of dynamical tunnelling is particularly suited to address the problem of quantum tunnelling in high dimensions (d>1). In physics, quantum tunnelling, barrier penetration, or simply tunnelling is a quantum mechanical phenomenon in which an object such as an electron or atom passes through a potential energy barrier that, according to classical mechanics, the object does not have sufficient energy to enter or surmount. Because the electrons behave like an ideal gas, one may also think about conduction in very simplistic terms such as the Drude model, and introduce concepts such as electron mobility. An alternative to tunnel injection is the spin injection. == See also == * Hot carrier injection == References == Category:Quantum mechanics Category:Semiconductors When two differently doped regions exist in the same crystal, a semiconductor junction is created. A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material), where size restricts the movements of the charge carriers forcing them into a quantum confinement. ",Intrinsic semiconductors,Electrical impedance tomography,Quantum conduction,Carrier mobility,Variable range hopping,E,kaggle200,"Quantum tunnelling composites (QTCs) are composite materials of metals and non-conducting elastomeric binder, used as pressure sensors. They use quantum tunnelling: without pressure, the conductive elements are too far apart to conduct electricity; when pressure is applied, they move closer and electrons can tunnel through the insulator. The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 10 between pressured and unpressured states.
Tunneling is the cause of some important macroscopic physical phenomena. Quantum tunnelling has important implications for the functioning of nanotechnology.
At low temperature, or in systems with a large degree of structural disorder (such as fully amorphous systems), electrons cannot access delocalized states. In such a system, electrons can only travel by tunnelling from one site to another, in a process called ""variable range hopping"". In the original theory of variable range hopping, as developed by Mott and Davis, the probability P of an electron hopping from one site i to another site j depends on their separation in space R and their separation in energy W.
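The Mott–Davis hopping probability described above is commonly written P ∝ exp(−2αR − W/(kBT)), with R the spatial separation and W the energy separation of the two sites; the sketch below just evaluates that trade-off for a few site pairs, with the localisation length 1/α, the energies and the temperature chosen purely for illustration:

    import math

    K_B = 8.617e-5            # Boltzmann constant in eV/K

    def hop_probability(r_nm, w_ev, t_kelvin, alpha_per_nm=1.0):
        """Relative Mott-Davis hopping rate between two localised sites."""
        return math.exp(-2.0 * alpha_per_nm * r_nm - w_ev / (K_B * t_kelvin))

    # nearer sites tend to be further apart in energy, while distant sites can be closer
    # in energy; the winning compromise shifts with temperature ("variable range")
    for r_nm, w_ev in [(1.0, 0.10), (3.0, 0.03), (6.0, 0.01)]:
        print(r_nm, "nm :", hop_probability(r_nm, w_ev, t_kelvin=50.0))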
In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of","For a rectangular barrier, this expression simplifies to:
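The expression referred to just above is cut off in this excerpt, so purely as a stand-in illustration: for a rectangular barrier of height V0 and width L, the textbook thick-barrier transmission is T ≈ 16(E/V0)(1 − E/V0)·exp(−2κL) with κ = √(2m(V0 − E))/ħ. A minimal sketch under that assumption, for an electron and illustrative barrier parameters:

    import math

    HBAR = 1.0545718e-34      # J*s
    M_E = 9.1093837e-31       # electron mass, kg
    EV = 1.6021766e-19        # J per eV

    def transmission(e_ev, v0_ev, width_nm):
        """Thick-barrier (kappa*L >> 1) transmission through a rectangular barrier."""
        kappa = math.sqrt(2 * M_E * (v0_ev - e_ev) * EV) / HBAR      # 1/m
        prefactor = 16 * (e_ev / v0_ev) * (1 - e_ev / v0_ev)
        return prefactor * math.exp(-2 * kappa * width_nm * 1e-9)

    # illustrative: 1 eV electron, 3 eV barrier, widths of 0.5 and 1.0 nm;
    # the steep drop with width is the exponential distance dependence the QTC passage relies on
    for w in (0.5, 1.0):
        print(w, "nm :", transmission(1.0, 3.0, w))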
Quantum tunnelling composites (QTCs) are composite materials of metals and non-conducting elastomeric binder, used as pressure sensors. They use quantum tunnelling: without pressure, the conductive elements are too far apart to conduct electricity; when pressure is applied, they move closer and electrons can tunnel through the insulator. The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 1012 between pressured and unpressured states.Quantum tunneling composites hold multiple designations in specialized literature, such as: conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR). However, in some cases Force-sensing resistors may operate predominantly under percolation regime; this implies that the composite resistance grows for an incremental applied stress or force.
Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers.In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of where n = 2, 3, 4, depending on the dimensionality of the system.","At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers.In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to anotherIn such a system, electrons can only travel by tunnelling for one site to another, in a process called ""variable range hopping""In the original theory of variable range hopping, as developed by Mott and Davis, the probability formula_50, of an electron hopping from one site formula_51, to another site formula_52, depends on their separation in space formula_53, and their separation in energy formula_54.
In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to anotherQuantum tunnelling has important implications on functioning of nanotechnology.
At low temperature, or in system with a large degree of structural disorder (such as fully amorphous systems), electrons cannot access delocalized statesThe effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 1012 between pressured and unpressured states.Quantum tunneling composites hold multiple designations in specialized literature, such as: conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR)The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 10 between pressured and unpressured states.
Tunneling is the cause of some important macroscopic physical phenomenaFor a rectangular barrier, this expression simplifies to:
Quantum tunnelling composites (QTCs) are co","At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers.In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to anotherIn such a system, electrons can only travel by tunnelling for one site to another, in a process called ""variable range hopping""In the original theory of variable range hopping, as developed by Mott and Davis, the probability formula_50, of an electron hopping from one site formula_51, to another site formula_52, depends on their separation in space formula_53, and their separation in energy formula_54.
In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to anotherQuantum tunnelling has important implications on functioning of nanotechnology.
At low temperature, or in system with a large degree of structural disorder (such as fully amorphous systems), electrons cannot access delocalized statesThe effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 1012 between pressured and unpressured states.Quantum tunneling composites hold multiple designations in specialized literature, such as: conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR)The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 10 between pressured and unpressured states.
Tunneling is the cause of some important macroscopic physical phenomenaFor a rectangular barrier, this expression simplifies to:
Quantum tunnelling composites (QTCs) are co[SEP]What is the term used to describe the conduction that occurs in non-crystalline semiconductors by charges quantum tunnelling from one localised site to another?","['E', 'D', 'C']",1.0
What is resistivity?,"Soil resistivity is a measure of how much the soil resists or conducts electric current. A resist, used in many areas of manufacturing and art, is something that is added to parts of an object to create a pattern by protecting these parts from being affected by a subsequent stage in the process.OED, ""Resist"", 3. Actual resistivity measurements are required to fully qualify the resistivity and its effects on the overall transmission system. The resistivity measured for a given current probe spacing represents, to a first approximation, the apparent resistivity of the soil to a depth equal to that spacing. A force-sensing resistor is a material whose resistance changes when a force, pressure or mechanical stress is applied. In physics, resistive force is a force, or the vector sum of numerous forces, whose direction is opposite to the motion of a body, and may refer to: * Friction, during sliding and/or rolling * Drag (physics), during movement through a fluid (see fluid dynamics) * Normal force, exerted reactionally back on the acting body by the compressive, tensile or shear stress within the recipient body * Intermolecular forces, when separating adhesively bonded surfaces * Magnetic repulsion, when a magnetic object moves against another magnetic field * Gravity, during vertical takeoff * Mechanical load, in a simple machine Chemical and physical changes occur in the exposed areas of the resist layer. In semiconductor fabrication, a resist is a thin layer used to transfer a circuit pattern to the semiconductor substrate which it is deposited upon. A resist is not always necessary. Resists are generally proprietary mixtures of a polymer or its precursor and other small molecules (e.g. photoacid generators) that have been specially formulated for a given lithography technology. The soil resistivity value is subject to great variation, due to moisture, temperature and chemical content. Several methods of resistivity measurement are frequently employed: For measurement the user can use Grounding resistance tester. ===Wenner method=== 4 pins The Wenner four-pin method, as shown in figure above, is the most commonly used technique for soil resistivity measurements. A wide range of typical soil resistivity values can be found in literature. Typical values are: * Usual values: from 10 up to 1000 (Ω-m) * Exceptional values: from 1000 up to 10000 (Ω-m) The SI unit of resistivity is the Ohm-meter (Ω-m); in the United States the Ohm-centimeter (Ω-cm) is often used instead. Resists may also be formulated to be sensitive to charged particles, such as the electron beams produced in scanning electron microscopes. Sometimes the conductivity, the reciprocal of the resistivity, is quoted instead. This is particularly true for large or long objects. ==Variability== Electrical conduction in soil is essentially electrolytic and for this reason the soil resistivity depends on: * moisture content * salt content * temperature (above the freezing point 0 °C) Because of the variability of soil resistivity, IEC standards require that the seasonal variation in resistivity be accounted for in transmission system design.IEC Std 61936-1 ""Power Installations Exceeding 1 kV ac – Part 1: Common Rules"" Section 10.3.1 General Clause b. Being copyright free, these numbers are widely copied, sometimes without acknowledgement. 
==Measurement== Because soil quality may vary greatly with depth and over a wide lateral area, estimation of soil resistivity based on soil classification provides only a rough approximation. A force-sensing resistor operating based on percolation exhibits a positive coefficient of pressure, and therefore, an increment in the applied pressure causes an increment in the electrical resistance R. For a given applied stress \sigma, the electrical resistivity \rho of the conductive polymer can be computed from \rho=\rho_0(\phi-\phi_c)^{-x}, where \rho_0 is a prefactor depending on the transport properties of the conductive polymer, and x is the critical conductivity exponent. The soil resistivity measurements will be affected by existing nearby grounded electrodes. ",Resistivity is an extrinsic property of a material that describes how difficult it is to make electrical current flow through it. It is measured in ohms and is dependent on the material's shape and size.,Resistivity is a measure of the resistance of a material to electrical current flow. It is measured in ohm-meters and is dependent on the material's shape and size.,Resistivity is an intrinsic property of a material that describes how difficult it is to make electrical current flow through it. It is measured in ohm-meters and is independent of the material's shape and size.,Resistivity is a measure of the electrical current that can flow through a material. It is measured in ohms and is dependent on the material's shape and size.,Resistivity is a measure of the electrical current that can flow through a material. It is measured in ohm-meters and is independent of the material's shape and size.,C,kaggle200,"Electrical resistivity (also called specific electrical resistance or volume resistivity) is a fundamental property of a material that measures how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-meter (Ω⋅m). For example, if a 1 m³ solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
where ℓ is the length of the conductor, measured in metres (m), A is the cross-sectional area of the conductor measured in square metres (m²), σ (sigma) is the electrical conductivity measured in siemens per meter (S·m⁻¹), and ρ (rho) is the electrical resistivity (also called ""specific electrical resistance"") of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not the geometry of the wire. Resistivity and conductivity are reciprocals: ρ = 1/σ. Resistivity is a measure of the material's ability to oppose electric current.
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula R = ρℓ/A above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious.
Both ""resistance"" and ""resistivity"" describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property. This means that all pure copper wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity, but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper.","Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a 1 m3 solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
Resistivity logging is a method of well logging that works by characterizing the rock or sediment in a borehole by measuring its electrical resistivity. Resistivity is a fundamental material property which represents how strongly a material opposes the flow of electric current. In these logs, resistivity is measured using four electrical probes to eliminate the resistance of the contact leads. The log must run in holes containing electrically conductive mud or water, i.e., with enough ions present in the drilling fluid.
Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and doesn't depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity, but a long, thin copper wire has a much larger resistance than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper.","Resistivity is a fundamental material property which represents how strongly a material opposes the flow of electric currentElectrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current- Electrical resistivity (also called specific electrical resistance or volume resistivity) is a fundamental property of a material that measures how strongly it resists electric currentResistivity is a measure of the material's ability to oppose electric current.
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula formula_62 aboveFor example, if a solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is , then the resistivity of the material is .
where formula_5 is the length of the conductor, measured in metres (m), is the cross-sectional area of the conductor measured in square metres (m), (sigma) is the electrical conductivity measured in siemens per meter (S·m), and (rho) is the electrical resistivity (also called ""specific electrical resistance"") of the material, measured in ohm-metres (Ω·m)Every material has its own characteristic resistivityFor example, if a 1 m3 solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
Resistivity logging is a method of well logging that works by characterizing the rock or sediment in a borehole by measuring its electrical resistivityA low resistivity indicates a material that readily allows electric currentResistivity is commonly represented by the Greek letter (rho)Resistivity is commonly represented by the Greek letter ρ (rho)For example, rubber has a far larger resistivity than copperFor example, rubber has a far larger resistivity than copper.The SI unit of electrical resistivity is the ohm-meter (Ω⋅m)Resistivity and","Resistivity is a fundamental material property which represents how strongly a material opposes the flow of electric currentElectrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current- Electrical resistivity (also called specific electrical resistance or volume resistivity) is a fundamental property of a material that measures how strongly it resists electric currentResistivity is a measure of the material's ability to oppose electric current.
Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula formula_62 aboveFor example, if a solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is , then the resistivity of the material is .
where formula_5 is the length of the conductor, measured in metres (m), is the cross-sectional area of the conductor measured in square metres (m), (sigma) is the electrical conductivity measured in siemens per meter (S·m), and (rho) is the electrical resistivity (also called ""specific electrical resistance"") of the material, measured in ohm-metres (Ω·m)Every material has its own characteristic resistivityFor example, if a 1 m3 solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m.
Resistivity logging is a method of well logging that works by characterizing the rock or sediment in a borehole by measuring its electrical resistivityA low resistivity indicates a material that readily allows electric currentResistivity is commonly represented by the Greek letter (rho)Resistivity is commonly represented by the Greek letter ρ (rho)For example, rubber has a far larger resistivity than copperFor example, rubber has a far larger resistivity than copper.The SI unit of electrical resistivity is the ohm-meter (Ω⋅m)Resistivity and[SEP]What is resistivity?","['C', 'E', 'B']",1.0
What did Newton adopt after his correspondence with Hooke in 1679-1680?,"Newton and Hooke had brief exchanges in 1679–80, when Hooke, appointed to manage the Royal Society's correspondence, opened up a correspondence intended to elicit contributions from Newton to Royal Society transactions, which had the effect of stimulating Newton to work out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Newton was well-versed in both classics and modern languages. In the , Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint for centuries until it was superseded by the theory of relativity. A draft letter regarding the matter is included in Newton's personal first edition of Philosophiæ Naturalis Principia Mathematica, which he must have been amending at the time. This followed stimulation by a brief exchange of letters in 1679–80 with Hooke, who had been appointed to manage the Royal Society's correspondence, and who opened a correspondence intended to elicit contributions from Newton to Royal Society transactions. Popular Science Monthly Volume 17, July. s:Popular Science Monthly/Volume 17/July 1880/Goethe's Farbenlehre: Theory of Colors II === Gravity === In 1679, Newton returned to his work on celestial mechanics by considering gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. Newton also made seminal contributions to optics, and shares credit with German mathematician Gottfried Wilhelm Leibniz for developing infinitesimal calculus. Some of the content contained in Newton's papers could have been considered heretical by the church. He guessed the same force was responsible for other orbital motions, and hence named it ""universal gravitation"". Newton used his mathematical description of gravity to derive Kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the Solar System's heliocentricity. and explained why he put his expositions in this form,Newton, Principia, 1729 English translation, p. Newton later became involved in a dispute with Leibniz over priority in the development of calculus (the Leibniz–Newton calculus controversy). Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Newton was elected a Fellow of the Royal Society (FRS) in 1672. == Mid-life == === Calculus === Newton's work has been said ""to distinctly advance every branch of mathematics then studied"". Most modern historians believe that Newton and Leibniz developed calculus independently, although with very different mathematical notations. * Opticks (1704) * Reports as Master of the Mint (1701–1725) * Arithmetica Universalis (1707) === Published posthumously === * De mundi systemate (The System of the World) (1728) * Optical Lectures (1728) * The Chronology of Ancient Kingdoms Amended (1728) * Observations on Daniel and The Apocalypse of St. 
John (1733) * Method of Fluxions (1671, published 1736) * An Historical Account of Two Notable Corruptions of Scripture (1754) == See also == * Elements of the Philosophy of Newton, a book by Voltaire * List of multiple discoveries: seventeenth century * List of things named after Isaac Newton * List of presidents of the Royal Society == References == === Notes === === Citations === === Bibliography === * * This well documented work provides, in particular, valuable information regarding Newton's knowledge of Patristics * * * * * * * * * * == Further reading == === Primary === * Newton, Isaac. At the time, Cambridge's teachings were based on those of Aristotle, whom Newton read along with then more modern philosophers, including Descartes and astronomers such as Galileo Galilei and Thomas Street. Places selections from Newton's Principia in the context of selected writings by Copernicus, Kepler, Galileo and Einstein * * Newton, Isaac. Subsequent to Newton, much has been amended. Sir Isaac Newton (25 December 1642 – 20 March 1726/27) was an English mathematician, physicist, astronomer, alchemist, theologian, and author who was described in his time as a natural philosopher. ",The language of inward or centripetal force.,The language of gravitational force.,The language of outward or centrifugal force.,The language of tangential and radial displacements.,The language of electromagnetic force.,A,kaggle200,"Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondence. Hooke therefore wanted to hear from members about their researches, or their views about the researches of others; and as if to whet Newton's interest, he asked what Newton thought about various matters, and then gave a whole list, mentioning ""compounding the celestial motions of the planetts of a direct motion by the tangent and an attractive motion towards the central body"", and ""my hypothesis of the lawes or causes of springinesse"", and then a new hypothesis from Paris about planetary motions (which Hooke described at length), and then efforts to carry out or improve national surveys, the difference of latitude between London and Cambridge, and other items. Newton replied with ""a fansy of my own"" about determining the Earth's motion, using a falling body. Hooke disagreed with Newton's idea of how the falling body would move, and a short correspondence developed.
On the other hand, Newton did accept and acknowledge, in all editions of the ""Principia"", that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke, and Halley in this connection in the Scholium to Proposition 4 in Book 1. Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ...""
In the 1660s Newton studied the motion of colliding bodies, and deduced that the centre of mass of two colliding bodies remains in uniform motion. Surviving manuscripts of the 1660s also show Newton's interest in planetary motion and that by 1669 he had shown, for a circular case of planetary motion, that the force he called ""endeavour to recede"" (now called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The difference between the centrifugal and centripetal points of view, though a significant change of perspective, did not change the analysis. Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had, by 1669, arrived at proofs that in a circular case of planetary motion, ""endeavour to recede"" (what was later called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.","Newton's early work on motion In the 1660s Newton studied the motion of colliding bodies and deduced that the centre of mass of two colliding bodies remains in uniform motion. Surviving manuscripts of the 1660s also show Newton's interest in planetary motion and that by 1669 he had shown, for a circular case of planetary motion, that the force he called ""endeavour to recede"" (now called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The difference between the centrifugal and centripetal points of view, though a significant change of perspective, did not change the analysis. Newton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
Newton acknowledged in 1686 that an initial stimulus on him in 1679/80 to extend his investigations of the movements of heavenly bodies had arisen from correspondence with Robert Hooke in 1679/80.Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondence. Hooke therefore wanted to hear from members about their researches, or their views about the researches of others; and as if to whet Newton's interest, he asked what Newton thought about various matters, and then gave a whole list, mentioning ""compounding the celestial motions of the planetts of a direct motion by the tangent and an attractive motion towards the central body"", and ""my hypothesis of the lawes or causes of springinesse"", and then a new hypothesis from Paris about planetary motions (which Hooke described at length), and then efforts to carry out or improve national surveys, the difference of latitude between London and Cambridge, and other items. Newton replied with ""a fansy of my own"" about determining the Earth's motion, using a falling body. Hooke disagreed with Newton's idea of how the falling body would move, and a short correspondence developed.
In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had, by 1669, arrived at proofs that in a circular case of planetary motion, ""endeavour to recede"" (what was later called centrifugal force) had an inverse-square relation with distance from the center. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.","- Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondenceAfter his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal forceAfter his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal forceNewton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
Newton acknowledged in 1686 that an initial stimulus on him in 1679/80 to extend his investigations of the movements of heavenly bodies had arisen from correspondence with Robert Hooke in 1679/80.Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondenceNewton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ...""
In the 1660s Newton studied the motion of colliding bodies, and deduced that the centre of mass of two colliding bodies remains in uniform motionHooke therefore wanted to hear from members about their researches, or their views about the researches of others; and as if to whet Newton's interest, he asked what Newton thought about various matters, and then gave a whole list, mentioning ""compounding the celestial motions of the planetts of a direct motion by the tangent and an attractive motion towards the central body"", and ""my hypothesis of the lawes or causes of springinesse"", and then a new hypothesis from Paris about planetary motions (which Hooke described at length), a","- Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondenceAfter his 1679–1680 correspondence with Hooke, described below, Newton adopted the language of inward or centripetal forceAfter his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal forceNewton also clearly expressed the concept of linear inertia in the 1660s: for this Newton was indebted to Descartes' work published 1644.
Newton acknowledged in 1686 that an initial stimulus on him in 1679/80 to extend his investigations of the movements of heavenly bodies had arisen from correspondence with Robert Hooke in 1679/80.Hooke had started an exchange of correspondence in November 1679 by writing to Newton, to tell Newton that Hooke had been appointed to manage the Royal Society's correspondenceNewton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: ""yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ...""
In the 1660s Newton studied the motion of colliding bodies, and deduced that the centre of mass of two colliding bodies remains in uniform motionHooke therefore wanted to hear from members about their researches, or their views about the researches of others; and as if to whet Newton's interest, he asked what Newton thought about various matters, and then gave a whole list, mentioning ""compounding the celestial motions of the planetts of a direct motion by the tangent and an attractive motion towards the central body"", and ""my hypothesis of the lawes or causes of springinesse"", and then a new hypothesis from Paris about planetary motions (which Hooke described at length), a[SEP]What did Newton adopt after his correspondence with Hooke in 1679-1680?","['A', 'B', 'C']",1.0
What is the metallicity of Kapteyn's star estimated to be?,"Kapteyn's Star is a class M1 red subdwarf about 12.83 light-years from Earth in the southern constellation Pictor; it is the closest halo star to the Solar System. During this process, the stars in the group, including Kapteyn's Star, may have been stripped away as tidal debris. thumb|left|250px|Comparison with Sun, Jupiter and Earth Kapteyn's Star is between one quarter and one third the size and mass of the Sun and has a much cooler effective temperature at about 3500 K, with some disagreement in the exact measurements between different observers. Kapteyn's Star is distinctive in a number of regards: it has a high radial velocity, orbits the Milky Way retrograde, and is the nearest-known halo star to the Sun. In comparison, the Sun is about 4.6 billion years old and has a surface temperature of 5,778 K. Stars like Kapteyn's Star have the ability to live up to 100–200 billion years, ten to twenty times longer than the Sun will live. ==Search for planets== In 2014, Kapteyn's Star was announced to host two planets, Kapteyn b and Kapteyn c, based on Doppler spectroscopy observations by the HARPS spectrometer which is housed at the European Southern Observatory's La Silla Observatory in Chile, at the Keck Observatory in Hawaii, and at the PFS Observatory, also in Chile. The abundance of elements other than hydrogen and helium, what astronomers term the metallicity, is about 14% of the abundance in the Sun. In 2014, two super-Earth planet candidates in orbit around the star were announced, but later refuted. ==Characteristics== Based upon parallax measurements, Kapteyn's Star is from the Earth. Kapteyn's Star at SolStations.com. Kapteyn b was thought to make a complete orbit around its parent star about every 48.62 days at a distance of 0.17 AU, with an eccentricity of 0.21, meaning its orbit is mildly elliptical. There is currently no evidence for planets orbiting Kapteyn's Star. ==See also== * List of nearest stars and brown dwarfs * Stars named after people ==References== ==Further reading== *. *. *. *. *. ==External links== * SolStation.com: Kapteyn's Star * Press release on planetary system Category:M-type main-sequence stars Category:M-type subdwarfs Category:BY Draconis variables Category:High-proper-motion stars Category:Local Bubble Category:Hypothetical planetary systems Category:Pictor CD-45 01841 0191 033793 024186 Pictoris, VZ However, subsequent research by Robertson et al. (2015) found that the orbital period of Kapteyn b is an integer fraction (1/3) of their estimated stellar rotation period, and thus the planetary signal is most likely an artifact of stellar activity. Guinan et al. (2016) suggested that the present day star could potentially support life on Kapteyn b, but that the planet's atmosphere may have been stripped away when the star was young (~0.5 Gyr) and highly active. The metallicity distribution function is an important concept in stellar and galactic evolution. The star has a mass of 0.27 , a radius of 0.29 and has about 1.2% of the Sun's luminosity. Much of the iron in a star will have come from earlier type Ia supernovae. An Am star or metallic-line star is a type of chemically peculiar star of spectral type A whose spectrum has strong and often variable absorption lines of metals such as zinc, strontium, zirconium, and barium, and deficiencies of others, such as calcium and scandium. 
A much smaller percentage show stronger peculiarities, such as the dramatic under-abundance of iron peak elements in λ Boötis stars. ==sn stars== Another group of stars sometimes considered to be chemically peculiar are the 'sn' stars. While he was reviewing star charts and photographic plates, Kapteyn noted that a star, previously catalogued in 1873 by B. A. Gould as C.Z. V 243, seemed to be missing. The Am stars (CP1 stars) show weak lines of singly ionized Ca and/or Sc, but show enhanced abundances of heavy metals. It is a curve of what proportion of stars have a particular metallicity ([Fe/H], the relative abundance of iron and hydrogen) of a population of stars such as in a cluster or galaxy. The ""planets"" are in fact artifacts of the star's rotation and activity. ==History of observations== Attention was first drawn to what is now known as Kapteyn's Star by the Dutch astronomer Jacobus Kapteyn in 1898. Kapteyn b was described as the oldest-known potentially habitable planet, estimated to be 11 billion years old, while Kapteyn c was described as beyond the host star's habitable zone.David Dickinson, Discovered: Two New Planets for Kapteyn’s Star (June 4, 2014). ",8 times more than the Sun,8 times less than the Sun,13 light years away from Earth,Unknown,Equal to the Sun,B,kaggle200,"Kapteyn's series are important in physical problems. Among other applications, the solution formula_12 of Kepler's equation formula_13 can be expressed via a Kapteyn series:
Nearby K and M stars that are BY Draconis variables include Barnard's Star, Kapteyn's Star, 61 Cygni, Ross 248, Lacaille 8760, Lalande 21185, and Luyten 726-8. Ross 248 is the first discovered BY Draconis variable, the variability having been identified by Gerald Edward Kron in 1950. The variability of BY Draconis itself was discovered in 1966 and studied in detail by Pavel Fedorovich Chugainov over the period 1973–1976.
Kapteyn b, discovered in June 2014, is a possible rocky world of about 4.8 Earth masses and about 1.5 Earth radii, found orbiting in the habitable zone of the red subdwarf Kapteyn's Star, 12.8 light-years away.
In 2014, the first planets around a halo star were announced around Kapteyn's star, the nearest halo star to Earth, around 13 light years away. However, later research suggests that Kapteyn b is just an artefact of stellar activity and that Kapteyn c needs more study to be confirmed. The metallicity of Kapteyn's star is estimated to be about 8 times lower than that of the Sun.
Kapteyn and Gill published their Durchmusterung in three volumes that together formed the Cape Photographic Durchmusterung: declination zones -18° to -37° (1896), -38° to -52° (1897) and -53° to -89° (1900).
In 2014, the first planets around a halo star were announced around Kapteyn's star, the nearest halo star to Earth, around 13 light years away. However, later research suggests that Kapteyn b is just an artefact of stellar activity and that Kapteyn c needs more study to be confirmed. The metallicity of Kapteyn's star is estimated to be about 8 times less than the Sun.Different types of galaxies have different histories of star formation and hence planet formation. Planet formation is affected by the ages, metallicities, and orbits of stellar populations within a galaxy. Distribution of stellar populations within a galaxy varies between the different types of galaxies.","The metallicity of Kapteyn's star is estimated to be about 8 times less than the Sun.The metallicity of Kapteyn's star is estimated to be about 8 times less than the Sun.Different types of galaxies have different histories of star formation and hence planet formationThe publication of the measurements performed with the instrument of Kapteyn marked a major breakthrough for Kapteyn in the field of astronomyHowever, later research suggests that Kapteyn b is just an artefact of stellar activity and that Kapteyn c needs more study to be confirmedThe variability of BY Draconis itself was discovered in 1966 and studied in detail by Pavel Fedorovich Chugainov over the period 1973–1976.
Kapteyn b, discovered in June 2014 is a possible rocky world of about 4.8 Earth masses and about 1.5 earth radii was found orbiting the habitable zone of the red subdwarf Kapteyn's Star, 12.8 light-years away.
In 2014, the first planets around a halo star were announced around Kapteyn's star, the nearest halo star to Earth, around 13 light years awayIn 1901 Kapteyn was the first Dutchman to receive a golden medal from the British Royal Astronomical SocietyAmong other applications, the solution formula_12 of Kepler's equation formula_13 can be expressed via a Kapteyn series:
Nearby K and M stars that are BY Draconis variables include Barnard's Star, Kapteyn's Star, 61 Cygni, Ross 248, Lacaille 8760, Lalande 21185, and Luyten 726-8Furthermore, working with the instrument may have inspired the theories of Kapteyn about the shape of the Milky WayThe astronomer Henry Sawerthal, who visited the laboratory of Kapteyn in 1889, described the results as ""...sufficient in the present instance to give results more accurate than those of the Northern Durchmusterung, a remark which not only applies to positions, but to magnitude (also).""The German astronomer Max Wolf had such admiration for the instrument of Kapteyn that he built his own 'improved' version of the instrument.
Kapteyn and Gill published their Durchmusterung in three volumes that together formed the Cape Photographic Durchmusterung: declin","The metallicity of Kapteyn's star is estimated to be about 8 times less than the Sun.The metallicity of Kapteyn's star is estimated to be about 8 times less than the Sun.Different types of galaxies have different histories of star formation and hence planet formationThe publication of the measurements performed with the instrument of Kapteyn marked a major breakthrough for Kapteyn in the field of astronomyHowever, later research suggests that Kapteyn b is just an artefact of stellar activity and that Kapteyn c needs more study to be confirmedThe variability of BY Draconis itself was discovered in 1966 and studied in detail by Pavel Fedorovich Chugainov over the period 1973–1976.
Kapteyn b, discovered in June 2014 is a possible rocky world of about 4.8 Earth masses and about 1.5 earth radii was found orbiting the habitable zone of the red subdwarf Kapteyn's Star, 12.8 light-years away.
In 2014, the first planets around a halo star were announced around Kapteyn's star, the nearest halo star to Earth, around 13 light years awayIn 1901 Kapteyn was the first Dutchman to receive a golden medal from the British Royal Astronomical SocietyAmong other applications, the solution formula_12 of Kepler's equation formula_13 can be expressed via a Kapteyn series:
Nearby K and M stars that are BY Draconis variables include Barnard's Star, Kapteyn's Star, 61 Cygni, Ross 248, Lacaille 8760, Lalande 21185, and Luyten 726-8Furthermore, working with the instrument may have inspired the theories of Kapteyn about the shape of the Milky WayThe astronomer Henry Sawerthal, who visited the laboratory of Kapteyn in 1889, described the results as ""...sufficient in the present instance to give results more accurate than those of the Northern Durchmusterung, a remark which not only applies to positions, but to magnitude (also).""The German astronomer Max Wolf had such admiration for the instrument of Kapteyn that he built his own 'improved' version of the instrument.
Kapteyn and Gill published their Durchmusterung in three volumes that together formed the Cape Photographic Durchmusterung: declin[SEP]What is the metallicity of Kapteyn's star estimated to be?","['B', 'D', 'C']",1.0
What is the SI base unit of time and how is it defined?,"The base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atom. Moreover, most other SI base units are defined by their relationship to the second: the metre is defined by setting the speed of light (in vacuum) to be 299 792 458 m/s, exactly; definitions of the SI base units kilogram, ampere, kelvin, and candela also depend on the second. In the table of SI base units, the entry for the second (symbol s, measuring time) gives the post-2019 formal definition: ""The second, symbol s, is the SI unit of time. The SI base units are the standard units of measurement defined by the International System of Units (SI) for the seven base quantities of what is now known as the International System of Quantities: they are notably a basic set from which all other SI units can be derived. Units of time based on orders of magnitude of the second include the nanosecond and the millisecond. ==Historical== The natural units for timekeeping used by most historical societies are the day, the solar year and the lunation. From 2005 to early 2019, the definitions of the SI base units were as follows: for the metre (symbol m, measuring length), the pre-2019 (2005) formal definition read ""The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second."" The second, symbol s, is the SI unit of time. The second (symbol: s) is the unit of time in the International System of Units (SI), historically defined as 1/86 400 of a day – this factor derived from the division of the day first into 24 hours, then to 60 minutes and finally to 60 seconds each (24 × 60 × 60 = 86400). The only base unit whose definition does not depend on the second is the mole, and only two of the 22 named derived units, radian and steradian, do not depend on the second either. ==Timekeeping standards== A set of atomic clocks throughout the world keeps time by consensus: the clocks ""vote"" on the correct time, and all voting clocks are steered to agree with the consensus, which is called International Atomic Time (TAI). The SI base units are a fundamental part of modern metrology, and thus part of the foundation of modern science and technology. Though many derivative units for everyday things are reported in terms of larger units of time, not seconds, they are ultimately defined in terms of the SI second; this includes time expressed in hours and minutes, velocity of a car in kilometers per hour or miles per hour, kilowatt hours of electricity usage, and speed of a turntable in rotations per minute. The current and formal definition in the International System of Units (SI) is more precise: The second [...] is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1. This note was intended to make it clear that the definition of the SI second is based on a Cs atom unperturbed by black-body radiation, that is, in an environment whose temperature is 0 K, and that the frequencies of primary frequency standards should therefore be corrected for the shift due to ambient radiation, as stated at the meeting of the CCTF in 1999. 
A footnote was added by the 14th meeting of the Consultative Committee for Time and Frequency in 1999; the footnote was added at the 86th (1997) meeting of the CIPM (CGPM 1998; 7th edition SI Brochure). The definition of a unit refers to an idealized situation that can be reached in the practical realization with some uncertainty only. The SI base units form a set of mutually independent dimensions as required by dimensional analysis commonly employed in science and technology. The most common units are the second, defined in terms of an atomic process; the day, an integral multiple of seconds; and the year, usually 365 days. Because the next higher SI unit is 1000 times larger, times of 10−14 and 10−13 seconds are typically expressed as tens or hundreds of femtoseconds. The exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1."" SI prefixes are frequently combined with the word second to denote subdivisions of the second: milliseconds (thousandths), microseconds (millionths), nanoseconds (billionths), and sometimes smaller units of a second. The units and their physical quantities are the second for time, the metre (sometimes spelled meter) for length or distance, the kilogram for mass, the ampere for electric current, the kelvin for thermodynamic temperature, the mole for amount of substance, and the candela for luminous intensity. The definition of the second should be understood as the definition of the unit of proper time: it applies in a small spatial domain which shares the motion of the caesium atom used to realize the definition.
A unit of time is any particular time interval, used as a standard way of measuring or expressing duration. The base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atom. The exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""
The gal is a derived unit, defined in terms of the centimeter–gram–second (CGS) base unit of length, the centimeter, and the second, which is the base unit of time in both the CGS and the modern SI system. In SI base units, 1 Gal is equal to 0.01 m/s².
Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition. An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. To describe observations of an event, a location (position in space) and time are typically noted.","Caesium-133 is the only stable isotope of caesium. The SI base unit of time, the second, is defined by a specific caesium-133 transition. Since 1967, the official definition of a second is: The second, symbol s, is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s−1.
A unit of time is any particular time interval, used as a standard way of measuring or expressing duration. The base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atom. The exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""Historically, many units of time were defined by the movements of astronomical objects.
Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions.Time is one of the seven fundamental physical quantities in both the International System of Units (SI) and International System of Quantities. The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atoms. General relativity is the primary framework for understanding how spacetime works. Through advances in both theoretical and experimental investigations of spacetime, it has been shown that time can be distorted and dilated, particularly at the edges of black holes.","The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atomsThe base unit of time in the International System of Units (SI) and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atomThe base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atomThe exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""Historically, many units of time were defined by the movements of astronomical objects.
Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future. The SI base unit of time, the second, is defined by a specific caesium-133 transition. Since 1967, the official definition of a second is: The second, symbol s, is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s−1.
A unit of time is any particular time interval, used as a standard way of measuring or expressing duration. The exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""
The gal is a derived unit, defined in terms of the centimeter–gram–second (CGS) base unit of length, the centimeter, and the second, which is the base unit of time in both the CGS and the modern SI system. In SI base units, 1 Gal is equal to 0.01 m/s².
Time is one of the seven fundamental physical quantit","The SI base unit of time is the second, which is defined by measuring the electronic transition frequency of caesium atomsThe base unit of time in the International System of Units (SI) and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atomThe base unit of time in the International System of Units (SI), and by extension most of the Western world, is the second, defined as about 9 billion oscillations of the caesium atomThe exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""Historically, many units of time were defined by the movements of astronomical objects.
Time is the continued sequence of existence and events that occurs in an apparently irreversible succession from the past, through the present, into the future. The SI base unit of time, the second, is defined by a specific caesium-133 transition. Since 1967, the official definition of a second is: The second, symbol s, is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9192631770 when expressed in the unit Hz, which is equal to s−1.
A unit of time is any particular time interval, used as a standard way of measuring or expressing duration. The exact modern SI definition is ""[The second] is defined by taking the fixed numerical value of the cesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the cesium 133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.""
The gal is a derived unit, defined in terms of the centimeter–gram–second (CGS) base unit of length, the centimeter, and the second, which is the base unit of time in both the CGS and the modern SI system. In SI base units, 1 Gal is equal to 0.01 m/s².
Time is one of the seven fundamental physical quantit[SEP]What is the SI base unit of time and how is it defined?","['B', 'C', 'D']",1.0
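The contexts in this row state the caesium definition of the second and the gal only in prose; the relations can be written out as worked equations. This is an editorial restatement in LaTeX of values already quoted above (the caesium-133 hyperfine frequency and the CGS gal); no new measurements are introduced, and the notation (ΔνCs, Gal) follows the passages.
\[
\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz}
\qquad\Longrightarrow\qquad
1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}},
\]
i.e. one second is the duration of 9 192 631 770 periods of the radiation from that transition, and for the CGS-derived acceleration unit named above,
\[
1\ \mathrm{Gal} = 1\ \mathrm{cm\,s^{-2}} = 0.01\ \mathrm{m\,s^{-2}}.
\]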
What is a planetary system?,"Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimalsp. 394, The Universal Book of Astronomy, from the Andromeda Galaxy to the Zone of Avoidance, David J. Dsrling, Hoboken, New Jersey: Wiley, 2004. . thumb|250px|An artist's concept of a planetary system A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Planetary means relating to a planet or planets. The different types of planetary systems, when classified using planetary masses, are: * Similar: When the masses of all planets in a system are similar to each other, the system's architecture is Similar. The term exoplanetary system is sometimes used in reference to other planetary systems. The four classes of planetary system architecture are defined based on how the mass of the planets is distributed around the host star. In hierarchical systems the planets are arranged so that the system can be gravitationally considered as a nested system of two-bodies, e.g. in a star with a close-in hot jupiter with another gas giant much further out, the star and hot jupiter form a pair that appears as a single object to another planet that is far enough out. At present, few systems have been found to be analogous to the Solar System with terrestrial planets close to the parent star. A planetary coordinate system (also referred to as planetographic, planetodetic, or planetocentric) is a generalization of the geographic, geodetic, and the geocentric coordinate systems for planets other than Earth. More commonly, systems consisting of multiple Super-Earths have been detected.Types and Attributes at Astro Washington.com. ===Classification of Planetary System Architectures=== Research has shown that there are four classes of planetary system architecture. Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. A satellite system is a set of gravitationally bound objects in orbit around a planetary mass object (incl. sub-brown dwarfs and rogue planets) or minor planet, or its barycenter. Apart from the Earth-Moon system and Mars' system of two tiny natural satellites, the other terrestrial planets are generally not considered satellite systems, although some have been orbited by artificial satellites originating from Earth. If an evolved star is in a binary or multiple system, then the mass it loses can transfer to another star, forming new protoplanetary disks and second- and third-generation planets which may differ in composition from the original planets, which may also be affected by the mass transfer. ==System architectures== The Solar System consists of an inner region of small rocky planets and outer region of large gas giants. The Solar System, with small rocky planets in the inner part and giant planets in the outer part is a type of Ordered system. ===Components=== ====Planets and stars==== 300px|thumb|right|The Morgan-Keenan spectral classification Most known exoplanets orbit stars roughly similar to the Sun: that is, main-sequence stars of spectral categories F, G, or K. Studies suggest that architectures of planetary systems are dependent on the conditions of their initial formation. 
Several objects farther from the Sun also have satellite systems consisting of multiple moons, including the complex Plutonian system where multiple objects orbit a common center of mass, as well as many asteroids and plutinos. In fact all of the giant planets of the Solar System possess large satellite systems as well as planetary rings, and it is inferred that this is a general pattern. During formation of a system, much material is gravitationally-scattered into distant orbits, and some planets are ejected completely from the system, becoming rogue planets. ===Evolved systems=== ====High-mass stars==== Planets orbiting pulsars have been discovered. The most notable system is the Plutonian system, which is also dwarf planet binary. ",A system of planets that are all located in the same solar system.,A system of planets that are all the same size and shape.,Any set of gravitationally bound non-stellar objects in or out of orbit around a star or star system.,A system of planets that are all located in the same galaxy.,A system of planets that are all made of gas.,C,kaggle200,"Kepler-90, also designated 2MASS J18574403+4918185, is an F-type star located about from Earth in the constellation of Draco. It is notable for possessing a planetary system that has the same number of observed planets as the Solar System.
Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. Bodies that are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disks. The Sun together with the planetary system revolving around it, including Earth, forms the Solar System. The term exoplanetary system is sometimes used in reference to other planetary systems.","Kepler-51 has three planets, all super-puffs. Kepler-51b, c and d have some of the lowest known densities of any exoplanet.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disks. The Sun together with the planetary system revolving around it, including Earth, forms the Solar System. The term exoplanetary system is sometimes used in reference to other planetary systems.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star.","Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disksThe term exoplanetary system is sometimes used in reference to other planetary systems.Bodies that are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Kepler-51b, c and d have some of the lowest known densities of any exoplanet.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. The term exoplanetary system is sometimes used in reference to other planetary systems.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star. It is notable for possessing a planetary system that has the same number of observed planets as the Solar System.
Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbitsThe Sun together with the planetary system revolving around it, including Earth, forms the Solar SystemA comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary systemKepler-51 has three planets, all super-puffs- Kepler-90, also designated 2MASS J18574403+4918185, is an F-type star located about from Earth in the constellation of Draco","Generally speaking, systems with one or more planets constitute a planetary system, although such systems may also consist of bodies such as dwarf planets, asteroids, natural satellites, meteoroids, comets, planetesimals and circumstellar disksThe term exoplanetary system is sometimes used in reference to other planetary systems.Bodies that are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. Kepler-51b, c and d have some of the lowest known densities of any exoplanet.
A planetary system is a set of gravitationally bound non-stellar objects in or out of orbit around a star or star system. The term exoplanetary system is sometimes used in reference to other planetary systems.
A planetary system around HD 40307 contains four confirmed planets and two other possible planets, all within 0.6 AU of the star. It is notable for possessing a planetary system that has the same number of observed planets as the Solar System.
Within a planetary system, planets, dwarf planets, asteroids and other minor planets, comets, and space debris orbit the system's barycenter in elliptical orbitsThe Sun together with the planetary system revolving around it, including Earth, forms the Solar SystemA comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary systemKepler-51 has three planets, all super-puffs- Kepler-90, also designated 2MASS J18574403+4918185, is an F-type star located about from Earth in the constellation of Draco[SEP]What is a planetary system?","['C', 'A', 'B']",1.0
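The contexts in this row distinguish bodies on elliptical orbits, which belong to the planetary system, from comets on parabolic or hyperbolic orbits, which are not gravitationally bound. A compact way to see that distinction is the specific orbital energy of the standard two-body problem; this is a sketch using the usual symbols (v for speed, r for distance from the star, GM for the star's gravitational parameter), none of which appear in the row itself.
\[
\varepsilon = \frac{v^{2}}{2} - \frac{GM}{r},
\qquad
\begin{cases}
\varepsilon < 0 & \text{elliptical orbit (gravitationally bound)}\\
\varepsilon = 0 & \text{parabolic orbit (marginally unbound)}\\
\varepsilon > 0 & \text{hyperbolic orbit (unbound)}
\end{cases}
\]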
What is the result of the collapse of a cavitation bubble?,"The cavitation phenomenon may manifest in any of the following situations: * imposed hydrostatic tensile stress acting on a pre-existing void * void pressurization due to gases that are generated due to chemical action (as in volatilization of low-molecular weight waxes or oils: 'blowpoint' for insufficiently cured rubber, or 'thermal blowout' for systems operating at very high temperature) * void pressurization due to gases that come out of solution (as in gases dissolved at high pressure) ==References== Category:Rubber properties In cavitation, pressure is responsible for the mass transfer between liquid and vapor phases. Cavitation is the unstable unhindered expansion of a microscopic void in a solid elastomer under the action of tensile hydrostatic stresses. In chemistry, a cavitand is a container-shaped molecule. Cavitations are an area of dead or dying bone. There are two general categories of phase change models used for cavitation: the barotropic models and equilibrium models. Cavitation modelling is a type of computational fluid dynamic (CFD) that represents the flow of fluid during cavitation. A supercavitating torpedo is a torpedo using the effect of supercavitation to create a bubble around the torpedo to move at high velocity under water. * VA-111 Shkval, 1977 * Hoot, 2006 * Superkavitierender Unterwasserlaufkörper (Supercavitating underwater-travelling munition) Barracuda, 2005 prototype * Unnamed prototype, mentioned 2004Supercavitating Torpedo - A rocket torpedo that swims in an air bubble (2004) PopularScience The DARPA also considered building supercavitating minisubs dubbed ""Underwater Express"".A super fast, (super loud) minisub (2009) Defense Tech ==References== * The cavity of the cavitand allows it to engage in host–guest chemistry with guest molecules of a complementary shape and size. These types of cavitands were extensively investigated by Rebek, and Gibb, among others. == Applications of Cavitands == Specific cavitands form the basis of rigid templates onto which de novo proteins can be chemically linked. Cavitands that have an extended aromatic bridging unit, or an extended cavity containing 3 rows of aromatic rings are referred to as deep-cavity cavitands and have broad applications in host-guest chemistry. It covers a wide range of applications, such as pumps, water turbines, pump inducers, and fuel cavitation in orifices as commonly encountered in fuel injection systems. == Modelling categories == Modelling efforts can be divided into two broad categories: vapor transport models and discrete bubble models. === Vapor transport model === Vapor transport models are best suited to large-scale cavitation, like sheet cavitation that often occurs on rudders and propellers. Proponents claim they primarily affect the jawbone, yet that cavitations are able to affect any bone. There is little evidence to support the theory of cavitation in the jawbone, and their diagnosis is highly controversial. However, modern usage in the field of supramolecular chemistry specifically refers to cavitands formed on a resorcinarene scaffold by bridging adjacent phenolic units. The equation for state of water is used, with the energy absorbed or released by phase change creating local temperature gradients which control the rate of phase change. == Bubble dynamics models == Several models for the bubble dynamics have been proposed: ===Rayleigh=== The Rayleigh model is the oldest, dating from 1917. 
This is different from the sharp interface models in that the vapor and liquid are modeled as distinct phases separated by an interface. === Sharp interface models === In sharp interface models, the interface is not diffused by advection. This section will briefly discuss the advantages and disadvantages of each type. === Barotropic model === If the pressure is greater than vapor pressure, then the fluid is liquid, otherwise vapor. The disadvantage of this approach is that when the cavities are larger than one cell, the vapor fraction is diffused across neighboring cells by the vapor transport model. ","The collapse of a cavitation bubble causes the surrounding liquid to expand, resulting in the formation of a low-pressure vapor bubble.","The collapse of a cavitation bubble causes a decrease in pressure and temperature of the vapor within, releasing a small amount of energy in the form of an acoustic shock wave and visible light.","The collapse of a cavitation bubble causes a sharp increase in pressure and temperature of the vapor within, releasing a significant amount of energy in the form of an acoustic shock wave and visible light.","The collapse of a cavitation bubble causes the surrounding liquid to implode, resulting in the formation of a vacuum.",The collapse of a cavitation bubble has no effect on the surrounding liquid or vapor.,C,kaggle200,"A key feature of the supercavitating object is the nose, which typically has a sharp edge around its perimeter to form the cavitation bubble. The nose may be articulated and shaped as a flat disk or cone. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose.
Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a spark. Vapor gases evaporate into the cavity from the surrounding medium; thus, the cavity is not a vacuum at all, but rather a low-pressure vapor (gas) bubble. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due its higher pressure, building up inertia as it moves inward. As the bubble finally collapses, the inward inertia of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible light. At the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. Applications include torpedoes and propellers, but in theory, the technique could be extended to an entire underwater vessel.
Cavitation (bubble formation/collapse in a fluid) involves an implosion process. When a cavitation bubble forms in a liquid (for example, by a high-speed water propeller), this bubble is typically rapidly collapsed—imploded—by the surrounding liquid.","A key feature of the supercavitating object is the nose, which typically has a sharp edge around its perimeter to form the cavitation bubble. The nose may be articulated and shaped as a flat disk or cone. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose.The very high speed required for supercavitation can be temporarily reached by underwater-fired projectiles and projectiles entering water. For sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble. In principle, supercavitating objects can be maneuvered using various methods, including the following: Drag fins that project through the bubble into the surrounding liquid (p. 22) A tilted object nose Gas injected asymmetrically near the nose to distort the cavity's geometry Vectoring rocket thrust through gimbaling for a single nozzle Differential thrust from multiple nozzles
Inertial cavitation Inertial cavitation was first observed in the late 19th century, considering the collapse of a spherical void within a liquid. When a volume of liquid is subjected to a sufficiently low pressure, it may rupture and form a cavity. This phenomenon is coined cavitation inception and may occur behind the blade of a rapidly rotating propeller or on any surface vibrating in the liquid with sufficient amplitude and acceleration. A fast-flowing river can cause cavitation on rock surfaces, particularly when there is a drop-off, such as on a waterfall.Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a spark. Vapor gases evaporate into the cavity from the surrounding medium; thus, the cavity is not a vacuum at all, but rather a low-pressure vapor (gas) bubble. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due its higher pressure, building up inertia as it moves inward. As the bubble finally collapses, the inward inertia of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible light. At the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.Inertial cavitation can also occur in the presence of an acoustic field. Microscopic gas bubbles that are generally present in a liquid will be forced to oscillate due to an applied acoustic field. If the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapse. Hence, inertial cavitation can occur even if the rarefaction in the liquid is insufficient for a Rayleigh-like void to occur. High-power ultrasonics usually utilize the inertial cavitation of microscopic vacuum bubbles for treatment of surfaces, liquids, and slurries.
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. Applications include torpedoes and propellers, but in theory, the technique could be extended to an entire underwater vessel.","At the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.Inertial cavitation can also occur in the presence of an acoustic fieldWhen a cavitation bubble forms in a liquid (for example, by a high-speed water propeller), this bubble is typically rapidly collapsed—imploded—by the surrounding liquid.The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible lightAt the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. As the bubble finally collapses, the inward inertia of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due to its higher pressure, building up inertia as it moves inward. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose.
Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a sparkIf the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapseFor sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble- A key feature of the supercavitating o","At the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.Inertial cavitation can also occur in the presence of an acoustic fieldWhen a cavitation bubble forms in a liquid (for example, by a high-speed water propeller), this bubble is typically rapidly collapsed—imploded—by the surrounding liquid.The bubble eventually collapses to a minute fraction of its original size, at which point the gas within dissipates into the surrounding liquid via a rather violent mechanism which releases a significant amount of energy in the form of an acoustic shock wave and as visible lightAt the point of total collapse, the temperature of the vapor within the bubble may be several thousand kelvin, and the pressure several hundred atmospheres.
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. As the bubble finally collapses, the inward inertia of the surrounding liquid causes a sharp increase of pressure and temperature of the vapor within. Once the conditions which caused the bubble to form are no longer present, such as when the bubble moves downstream, the surrounding liquid begins to implode due to its higher pressure, building up inertia as it moves inward. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose.
Other ways of generating cavitation voids involve the local deposition of energy, such as an intense focused laser pulse (optic cavitation) or with an electrical discharge through a spark. If the acoustic intensity is sufficiently high, the bubbles will first grow in size and then rapidly collapse. For sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble. A key feature of the supercavitating o[SEP]What is the result of the collapse of a cavitation bubble?","['C', 'D', 'E']",1.0
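The context for this row names the Rayleigh model (1917) as the oldest bubble-dynamics model but never writes it down. As a hedged sketch only, the Rayleigh–Plesset form usually quoted for a spherical bubble of radius R(t) in a liquid of density ρ_L and dynamic viscosity μ_L, with surface tension σ, bubble pressure p_B and far-field pressure p_∞ (symbols assumed here, not defined in the row), is
\[
R\,\ddot{R} + \frac{3}{2}\dot{R}^{2}
  = \frac{1}{\rho_{L}}\left( p_{B}(t) - p_{\infty}(t)
  - \frac{2\sigma}{R} - \frac{4\mu_{L}\dot{R}}{R} \right).
\]
Neglecting viscosity, surface tension and gas content gives Rayleigh's estimate of the collapse time of an empty cavity of initial radius R_0 under a constant driving pressure difference Δp,
\[
t_{c} \approx 0.915\, R_{0}\sqrt{\frac{\rho_{L}}{\Delta p}},
\]
which is consistent with the passage's description of the sharp rise in pressure and temperature as the bubble shrinks to a small fraction of its original size.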
Who was Giordano Bruno?,"Bruno Giordano (born 7 June 1954 in Aosta) is an Italian politician. He is an important scholar of twentieth-century Italy, in particular of the Fascist period and the relationship between Italians and the Catholic Church. == Biography == Giordano Bruno Guerri was born in Iesa, a district of Monticiano, in the province of Siena. Giordano was also a cousin to the Licavolis. thumb|Bruno in 2007 Giordano Bruno Guerri (born 21 December 1950) is an Italian historian, writer, and journalist. Giovanna Bruno (born 28 June 1975 in Andria) is an Italian politician. His works have been translated into French, English, Dutch, Polish, Portuguese, Serbian, Croatian, Spanish, German, and Hungarian. == Writings == * Giuseppe Bottai, un fascista critico. They married in 2014 and have two sons, Nicola Giordano (2006) and Pietro Tancredi (2011). == Politics == Guerri defines himself as liberal, libertarian, laissez-faire, and an ex-libertine, like the Partito Radicale, which he has sometimes supported in the past and shares views with, such as the struggle against the death penalty. * Filippo Tommaso Marinetti. Brunori is an Italian surname. He then enrolled in the Department of Modern Literature (specializing in contemporary history) at the Università Cattolica del Sacro Cuore, in Milan. For two years (1963–64), they worked in Viareggio as domestic help, and in 1965 they moved with Giordano to Ospiate di Bollate, on the outskirts of Milan, to be a worker. Notable people with the surname include: * Federigo Brunori (1566–1649), Italian painter * Matteo Brunori (born 1994), Italian footballer Category:Italian-language surnames Benito, Edda e Galeazzo, Milano, Mondadori, 2005. . * Patrizio Peci, Io, l'infame, Milano, A. Mondadori, 1983. He and Ida Magli founded a cultural movement, ‘ItalianiLiberi’, anti-Europe and free-thinking, for which he has directed the online journal italianiliberi.it. Giordano was known for his explosive temper. Da Romolo a Giovanni Paolo II, Milano, Mondadori, 1997. . * Paolo Garretto, Matera, La Bautta, 1994. Appearing at the same time as Renzo De Felice's book on popular acceptance of the Fascist regime, the essay placed him amongst the most authoritative Italian historic ‘revisionists’. Chicago Tribune, August 30, 1980, pp. W19. ==Early life== Anthony Giordano, nicknamed ""Tony G"", was born June 24, 1914, in St. Louis, Missouri. ",A German philosopher who supported the Keplerian theory that planets move in elliptical orbits around the Sun and believed that fixed stars are similar to the Sun but have different designs and are not subject to the dominion of One.,An English philosopher who supported the Ptolemaic theory that Earth is the center of the universe and believed that fixed stars are not similar to the Sun and do not have planets orbiting them.,A French philosopher who supported the Aristotelian theory that Earth is at the center of the universe and believed that fixed stars are similar to the Sun but do not have planets orbiting them.,An Italian philosopher who supported the Copernican theory that Earth and other planets orbit the Sun and believed that fixed stars are similar to the Sun and have planets orbiting them.,A Spanish philosopher who supported the Galilean theory that Earth and other planets orbit the Sun and believed that fixed stars are not similar to the Sun and do not have planets orbiting them.,D,kaggle200,"Tycho Brahe’s (1546-1601) system of the universe has been called “geo-heliocentric” due to its twofold structure. 
At its center lies the stationary Earth, which is orbited by the moon and sun. The planets then revolve about the sun while it revolves about the Earth. Beyond all of these heavenly bodies lies a sphere of fixed stars. This sphere rotates about the stationary Earth, creating the perceived motion of the stars in the sky. This system has an interesting feature in that the sun and planets cannot be contained in solid orbs (their orbs would collide), but yet the stars are represented as being contained in a fixed sphere at the boundary of the cosmos.
On 6 January 2015, NASA announced further observations conducted from May 2009 to April 2013 which included eight candidates between one and two times the size of Earth, orbiting in a habitable zone. Of these eight, six orbit stars that are similar to the Sun in size and temperature. Three of the newly confirmed exoplanets were found to orbit within habitable zones of stars similar to the Sun: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth.
In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. He was burned at the stake for his ideas by the Roman Inquisition.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets.","Tycho Brahe Tycho Brahe’s (1546-1601) system of the universe has been called “geo-heliocentric” due to its twofold structure. At its center lies the stationary Earth, which is orbited by the Moon and Sun. The planets then revolve about the Sun while it revolves about the Earth. Beyond all of these heavenly bodies lies a sphere of fixed stars. This sphere rotates about the stationary Earth, creating the perceived motion of the stars in the sky. This system has an interesting feature in that the Sun and planets cannot be contained in solid orbs (their orbs would collide), but yet the stars are represented as being contained in a fixed sphere at the boundary of the cosmos.
Speculation on extrasolar planetary systems In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. He was burned at the stake for his ideas by the Roman Inquisition.In the 18th century the same possibility was mentioned by Sir Isaac Newton in the ""General Scholium"" that concludes his Principia. Making a comparison to the Sun's planets, he wrote ""And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of One.""His theories gained traction through the 19th and 20th centuries despite a lack of supporting evidence. Long before their confirmation by astronomers, conjecture on the nature of planetary systems had been a focus of the search for extraterrestrial intelligence and has been a prevalent theme in fiction, particularly science fiction.
Early speculations This space we declare to be infinite... In it are an infinity of worlds of the same kind as our own.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets.","He was burned at the stake for his ideas by the Roman Inquisition.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. In it are an infinity of worlds of the same kind as our own.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. He was burned at the stake for his ideas by the Roman Inquisition. In the 18th century the same possibility was mentioned by Sir Isaac Newton in the ""General Scholium"" that concludes his Principia. This system has an interesting feature in that the Sun and planets cannot be contained in solid orbs (their orbs would collide), but yet the stars are represented as being contained in a fixed sphere at the boundary of the cosmos.
Speculation on extrasolar planetary systems In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. Three of the newly confirmed exoplanets were found to orbit within habitable zones of stars similar to the Sun: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth.
In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planetsTycho Brahe Tycho Brahe’s (1546-1601) system of the universe has been called “geo-heliocentric” due to its twofold structureMaking a comparison to the Sun's planets, he wrote ""And if the fixed stars are the cen","He was burned at the stake for his ideas by the Roman Inquisition.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. In it are an infinity of worlds of the same kind as our own.
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. He was burned at the stake for his ideas by the Roman Inquisition. In the 18th century the same possibility was mentioned by Sir Isaac Newton in the ""General Scholium"" that concludes his Principia. This system has an interesting feature in that the Sun and planets cannot be contained in solid orbs (their orbs would collide), but yet the stars are represented as being contained in a fixed sphere at the boundary of the cosmos.
Speculation on extrasolar planetary systems In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. Three of the newly confirmed exoplanets were found to orbit within habitable zones of stars similar to the Sun: two of the three, Kepler-438b and Kepler-442b, are near-Earth-size and likely rocky; the third, Kepler-440b, is a super-Earth.
In the 16th century the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun, put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets. Tycho Brahe Tycho Brahe’s (1546-1601) system of the universe has been called “geo-heliocentric” due to its twofold structure. Making a comparison to the Sun's planets, he wrote ""And if the fixed stars are the cen[SEP]Who was Giordano Bruno?","['D', 'E', 'A']",1.0
What are the Navier-Stokes equations?,"The Navier-Stokes equations are a set of partial differential equations that describe the motion of fluids. The Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Anglo-Irish physicist and mathematician George Gabriel Stokes. The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. The Navier–Stokes equations mathematically express momentum balance and conservation of mass for Newtonian fluids. One way to understand the nonlinearity of the Navier-Stokes equations is to consider the term (v · ∇)v in the equations. The nonlinear nature of the Navier-Stokes equations can be seen in the term (\mathbf{v}\cdot abla ) \mathbf{v}, which represents the acceleration of the fluid due to its own velocity. The Navier-Stokes equations are nonlinear and highly coupled, making them difficult to solve in general. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other things. The Navier-Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each other. The Navier–Stokes existence and smoothness problem concerns the mathematical properties of solutions to the Navier–Stokes equations, a system of partial differential equations that describe the motion of a fluid in space. The above solution is key to deriving Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant. ===Non-Newtonian fluids=== A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. * * * * Smits, Alexander J. (2014), A Physical Introduction to Fluid Mechanics, Wiley, * Temam, Roger (1984): Navier–Stokes Equations: Theory and Numerical Analysis, ACM Chelsea Publishing, * ==External links== * Simplified derivation of the Navier–Stokes equations * Three-dimensional unsteady form of the Navier–Stokes equations Glenn Research Center, NASA Category:Aerodynamics Category:Computational fluid dynamics Category:Concepts in physics Category:Equations of fluid dynamics Category:Functions of space and time Category:Partial differential equations Category:Transport phenomena The cross differentiated Navier–Stokes equation becomes two equations and one meaningful equation. For this reason, these equations are usually written for Newtonian fluids where the viscosity model is linear; truly general models for the flow of other kinds of fluids (such as blood) do not exist. ==Application to specific problems== The Navier–Stokes equations, even when written explicitly for specific fluids, are rather generic in nature and their proper application to specific problems can be very diverse. The Navier–Stokes equations are also of great interest in a purely mathematical sense. Using these properties, the Navier–Stokes equations of motion, expressed in tensor notation, are (for an incompressible Newtonian fluid): \frac{\partial u_i}{\partial x_i} = 0 \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = f_i \- \frac{1}{\rho} \frac{\partial p}{\partial x_i} \+ u \frac{\partial^2 u_i}{\partial x_i \partial x_j} where f_i is a vector representing external forces. 
For different types of fluid flow this results in specific forms of the Navier–Stokes equations. ===Newtonian fluid=== ====Compressible Newtonian fluid==== The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids, :\tau \propto \frac{\partial u}{\partial y} In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes: :* The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient. Solutions to the Navier–Stokes equations are used in many practical applications. As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable). Each term in any case of the Navier–Stokes equations is a body force. ","The Navier-Stokes equations are partial differential equations that describe the motion of viscous fluid substances, expressing momentum balance and conservation of mass for Newtonian fluids.","The Navier-Stokes equations are partial differential equations that describe the motion of viscous fluid substances, expressing momentum balance and conservation of mass for non-Newtonian fluids.","The Navier-Stokes equations are partial differential equations that describe the motion of non-viscous fluid substances, expressing momentum balance and conservation of mass for Newtonian fluids.","The Navier-Stokes equations are algebraic equations that describe the motion of non-viscous fluid substances, expressing momentum balance and conservation of mass for Newtonian fluids.","The Navier-Stokes equations are algebraic equations that describe the motion of viscous fluid substances, expressing momentum balance and conservation of mass for Newtonian fluids.",A,kaggle200,"The Navier-Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each other. This means that the equations cannot be solved using traditional linear techniques, and more advanced methods must be used instead. Nonlinearity is important in the Navier-Stokes equations because it allows the equations to describe a wide range of fluid dynamics phenomena, including the formation of shock waves and other complex flow patterns. However, the nonlinearity of the Navier-Stokes equations also makes them more difficult to solve, as traditional linear methods may not work.
In physics, the Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Anglo-Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. The Navier-Stokes equations describe the motion of fluids:
The Navier-Stokes equations are a set of partial differential equations that describe the motion of fluids. They are given by:","Navier-Stokes equations In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. The Navier-Stokes equations describe the motion of fluids: ρDvDt=−∇p+μ∇2v+ρg When the fluid is inviscid, or the viscosity can be assumed to be negligible, the Navier-Stokes equation simplifies to the Euler equation: This simplification is much easier to solve, and can apply to many types of flow in which viscosity is negligible. Some examples include flow around an airplane wing, upstream flow around bridge supports in a river, and ocean currents.The Navier-Stokes equation reduces to the Euler equation when μ=0 . Another condition that leads to the elimination of viscous force is ∇2v=0 , and this results in an ""inviscid flow arrangement"". Such flows are found to be vortex-like.
The Navier–Stokes equations mathematically express momentum balance and conservation of mass for Newtonian fluids. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable).
The Navier–Stokes equations ( nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842-1850 (Stokes).","The Navier-Stokes equations describe the motion of fluids:
The Navier-Stokes equations are a set of partial differential equations that describe the motion of fluids. They are never completely integrable).
The Navier–Stokes equations ( nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Irish physicist and mathematician George Gabriel Stokes. However, the nonlinearity of the Navier-Stokes equations also makes them more difficult to solve, as traditional linear methods may not work.
In physics, the Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Anglo-Irish physicist and mathematician George Gabriel Stokes. The Navier-Stokes equations describe the motion of fluids: ρ Dv/Dt = −∇p + μ∇²v + ρg. When the fluid is inviscid, or the viscosity can be assumed to be negligible, the Navier-Stokes equation simplifies to the Euler equation: ρ Dv/Dt = −∇p + ρg. This simplification is much easier to solve, and can apply to many types of flow in which viscosity is negligible. As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equationsNavier-Stokes equations In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations- The Navier-Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each otherThe difference between them and the closely related Euler equations is that Navier–Stokes equ","The Navier-Stokes equations describe the motion of fluids:
The Navier-Stokes equations are a set of partial differential equations that describe the motion of fluids. They are never completely integrable).
The Navier–Stokes equations ( nav-YAY STOHKS) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Irish physicist and mathematician George Gabriel Stokes. However, the nonlinearity of the Navier-Stokes equations also makes them more difficult to solve, as traditional linear methods may not work.
In physics, the Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Anglo-Irish physicist and mathematician George Gabriel Stokes. The Navier-Stokes equations describe the motion of fluids: ρ Dv/Dt = −∇p + μ∇²v + ρg. When the fluid is inviscid, or the viscosity can be assumed to be negligible, the Navier-Stokes equation simplifies to the Euler equation: ρ Dv/Dt = −∇p + ρg. This simplification is much easier to solve, and can apply to many types of flow in which viscosity is negligible. As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. Claude-Louis Navier developed the equations first using molecular theory, which was further confirmed by Stokes using continuum theory. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes).
In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations. Navier-Stokes equations In 1845, George Gabriel Stokes published another important set of equations, today known as the Navier-Stokes equations. The Navier-Stokes equations are nonlinear because the terms in the equations do not have a simple linear relationship with each other. The difference between them and the closely related Euler equations is that Navier–Stokes equ[SEP]What are the Navier-Stokes equations?","['A', 'E', 'D']",1.0
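Several passages in this row announce "The Navier-Stokes equations describe the motion of fluids:" or "They are given by:" and then the display equations are missing; only the inline fragment ρ Dv/Dt = −∇p + μ∇²v + ρg survives. As a reconstruction sketch consistent with that fragment, for an incompressible Newtonian fluid with constant density ρ, dynamic viscosity μ, velocity field v, pressure p and body acceleration g (the symbols as used in the passages):
\[
\rho\,\frac{D\mathbf{v}}{Dt}
  = \rho\left(\frac{\partial \mathbf{v}}{\partial t}
  + (\mathbf{v}\cdot\nabla)\mathbf{v}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{v} + \rho\,\mathbf{g},
\qquad
\nabla\cdot\mathbf{v} = 0 .
\]
Setting μ = 0 removes the viscous term and leaves the inviscid Euler equation referred to in the same passages:
\[
\rho\,\frac{D\mathbf{v}}{Dt} = -\nabla p + \rho\,\mathbf{g}.
\]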
What is the revised view of the atmosphere's nature based on the time-varying multistability that is associated with the modulation of large-scale processes and aggregated feedback of small-scale processes?,"An atmospheric model is a mathematical model constructed around the full set of primitive dynamical equations which govern atmospheric motions. Dynamic lifting and mixing produces cloud, precipitation and storms often on a synoptic scale. == Cause of instability == Whether or not the atmosphere has stability depends partially on the moisture content. Atmospheric instability is a condition where the Earth's atmosphere is considered to be unstable and as a result local weather is highly variable through distance and time.Stability of Air Atmospheric stability is a measure of the atmosphere's tendency to discourage vertical motion, and vertical motion is directly correlated to different types of weather systems and their severity. The U.S. Standard Atmosphere is a static atmospheric model of how the pressure, temperature, density, and viscosity of the Earth's atmosphere change over a wide range of altitudes or elevations. Atmosphere is a monthly peer-reviewed open access scientific journal covering research related to the Earth's atmosphere. Category:Atmosphere Coupling, Energetics and Dynamics of Atmospheric Regions (""CEDAR"") is a US NSF funded program targeting understanding of middle and upper atmospheric dynamics. Stable atmospheres can be associated with drizzle, fog, increased air pollution, a lack of turbulence, and undular bore formation. ==Forms== There are two primary forms of atmospheric instability:Explanation of Atmospheric Stability/Instability - by Steve W. Woodruff * Convective instability * Dynamic instability (fluid mechanics) Under convective instability thermal mixing through convection in the form of warm air rising leads to the development of clouds and possibly precipitation or convective storms. A mathematical model of the 1976 U.S. Standard Atmosphere. A barotropic model tries to solve a simplified form of atmospheric dynamics based on the assumption that the atmosphere is in geostrophic balance; that is, that the Rossby number of the air in the atmosphere is small. Effects of atmospheric instability in moist atmospheres include thunderstorm development, which over warm oceans can lead to tropical cyclogenesis, and turbulence. Most atmospheric models are numerical, i.e. they discretize equations of motion. ""The Meso-NH atmospheric simulation system. These rates of change predict the state of the atmosphere a short time into the future, with each time increment known as a time step. Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These indices, as well as atmospheric instability itself, involve temperature changes through the troposphere with height, or lapse rate. It is largely consistent in methodology with the International Standard Atmosphere, differing mainly in the assumed temperature distribution at higher altitudes. thumb|250px|Visualization of composition by volume of Earth's atmosphere. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed. As of 2009, dynamical guidance remained less skillful than statistical methods. 
==See also== * Atmospheric reanalysis * Climate model * Numerical weather prediction * Upper-atmospheric models * Static atmospheric model * Chemistry transport model == References == ==Further reading== * ==External links== * WRF Source Codes and Graphics Software Download Page * RAMS source code available under the GNU General Public License * MM5 Source Code download * The source code of ARPS * Model Visualisation Category:Numerical climate and weather models Category:Articles containing video clips Data is from [http://www.nasa.gov/centers/langley/pdf/245893main_MeteorologyTeacherRes- Ch2.r4.pdf NASA Langley]. ==Methodology== The USSA mathematical model divides the atmosphere into layers with an assumed linear distribution of absolute temperature T against geopotential altitude h.Gyatt, Graham (2006-01-14): ""The Standard Atmosphere"". ",The atmosphere is a system that is only influenced by large-scale processes and does not exhibit any small-scale feedback.,"The atmosphere possesses both chaos and order, including emerging organized systems and time-varying forcing from recurrent seasons.",The atmosphere is a system that is only influenced by small-scale processes and does not exhibit any large-scale modulation.,The atmosphere is a completely chaotic system with no order or organization.,The atmosphere is a completely ordered system with no chaos or randomness.,B,kaggle200,"If the atmosphere of a celestial body is very tenuous, like the atmosphere of the Moon or that of Mercury, the whole atmosphere is considered exosphere.
Also recall that the cyclostrophic conditions apply to small-scale processes, so extrapolation to higher radii is physically meaningless.
Analogue modelling has been widely used for geodynamic analysis and to illustrate the development of different geological phenomena. Models can explore small-scale processes, such as folding and faulting, or large-scale processes, such as tectonic movement and interior Earth structures.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows:","Atmospheres in the Solar System Atmosphere of the Sun Atmosphere of Mercury Atmosphere of Venus Atmosphere of Earth Atmosphere of the Moon Atmosphere of Mars Atmosphere of Ceres Atmosphere of Jupiter Atmosphere of Io Atmosphere of Callisto Atmosphere of Europa Atmosphere of Ganymede Atmosphere of Saturn Atmosphere of Titan Atmosphere of Enceladus Atmosphere of Uranus Atmosphere of Titania Atmosphere of Neptune Atmosphere of Triton Atmosphere of Pluto Outside the Solar System Main article: Extraterrestrial atmosphere Atmosphere of HD 209458 b
The dual nature with distinct predictability Over 50 years since Lorenz’s 1963 study and a follow-up presentation in 1972, the statement “weather is chaotic” has been well accepted. Such a view turns our attention from regularity associated with Laplace’s view of determinism to irregularity associated with chaos. In contrast to single-type chaotic solutions, recent studies using a generalized Lorenz model have focused on the coexistence of chaotic and regular solutions that appear within the same model using the same modeling configurations but different initial conditions. The results, with attractor coexistence, suggest that the entirety of weather possesses a dual nature of chaos and order with distinct predictability.Using a slowly varying, periodic heating parameter within a generalized Lorenz model, Shen and his co-authors suggested a revised view: “The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons”.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: ""The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons."" In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. The random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist.Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which ""measures the rate at which identical initial states diverge when subjected to slightly different dynamics"". They consider fidelity decay to be ""the closest quantum analog to the (purely classical) butterfly effect"". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos.","The results, with attractor coexistence, suggest that the entirety of weather possesses a dual nature of chaos and order with distinct predictability.Using a slowly varying, periodic heating parameter within a generalized Lorenz model, Shen and his co-authors suggested a revised view: “The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons”.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: ""The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons."" In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics including atoms in strong fields and the anisotropic Kepler problemModels can explore small-scale processes, such as folding and faulting, or large-scale processes, such as tectonic movement and interior Earth structures.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows:Atmospheres in the Solar System Atmosphere of the Sun Atmosphere of Mercury Atmosphere of Venus Atmosphere of Earth Atmosphere of the Moon Atmosphere of Mars Atmosphere of Ceres Atmosphere of Jupiter Atmosphere of Io Atmosphere of Callisto Atmosphere of Europa Atmosphere of Ganymede Atmosphere of Saturn Atmosphere of Titan Atmosphere of Enceladus Atmosphere of Uranus Atmosphere of Titania Atmosphere of Neptune Atmosphere of Triton Atmosphere of Pluto Outside the Solar System Main article: Extraterrestrial atmosphere Atmosphere of HD 209458 b
The dual n","The results, with attractor coexistence, suggest that the entirety of weather possesses a dual nature of chaos and order with distinct predictability. Using a slowly varying, periodic heating parameter within a generalized Lorenz model, Shen and his co-authors suggested a revised view: “The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons”.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: ""The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons."" In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics including atoms in strong fields and the anisotropic Kepler problemModels can explore small-scale processes, such as folding and faulting, or large-scale processes, such as tectonic movement and interior Earth structures.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows:Atmospheres in the Solar System Atmosphere of the Sun Atmosphere of Mercury Atmosphere of Venus Atmosphere of Earth Atmosphere of the Moon Atmosphere of Mars Atmosphere of Ceres Atmosphere of Jupiter Atmosphere of Io Atmosphere of Callisto Atmosphere of Europa Atmosphere of Ganymede Atmosphere of Saturn Atmosphere of Titan Atmosphere of Enceladus Atmosphere of Uranus Atmosphere of Titania Atmosphere of Neptune Atmosphere of Triton Atmosphere of Pluto Outside the Solar System Main article: Extraterrestrial atmosphere Atmosphere of HD 209458 b
The dual n[SEP]What is the revised view of the atmosphere's nature based on the time-varying multistability that is associated with the modulation of large-scale processes and aggregated feedback of small-scale processes?","['B', 'C', 'A']",1.0
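The atmosphere row above rests on Lorenz-type chaos (sensitive dependence on initial conditions, the butterfly effect) coexisting with order. A minimal numerical sketch of the chaotic half uses the classic three-variable Lorenz 1963 system; this is not the generalized Lorenz model cited in the row, and the parameters, step size and 1e-8 perturbation are standard illustrative choices rather than values from the row's sources.

```python
# Minimal sketch: sensitive dependence on initial conditions in the Lorenz 1963 system.
# Assumptions: classic parameters sigma=10, rho=28, beta=8/3; fixed-step RK4; the 1e-8
# perturbation and the integration length are arbitrary choices for illustration.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # nearly identical starting point
dt, steps = 0.01, 4000               # integrate to t = 40
for n in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if n % 500 == 0:
        print(f"t = {(n + 1) * dt:5.2f}   |a - b| = {np.linalg.norm(a - b):.3e}")
```

The printed separation grows from 1e-8 to order one within a few tens of time units; that exponential divergence is the "chaos" the row contrasts with the seasonally forced, time-varying "order".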
What is the reason that it is nearly impossible to see light emitted at the Lyman-alpha transition wavelength from a star farther than a few hundred light years from Earth?,"In hydrogen, its wavelength of 1215.67 angstroms ( or ), corresponding to a frequency of about , places Lyman-alpha in the ultraviolet (UV) part of the electromagnetic spectrum. In physics and chemistry, the Lyman series is a hydrogen spectral series of transitions and resulting ultraviolet emission lines of the hydrogen atom as an electron goes from n ≥ 2 to n = 1 (where n is the principal quantum number), the lowest energy level of the electron. Lyman-alpha radiation had previously been detected from other galaxies, but due to interference from the Sun, the radiation from the Milky Way was not detectable. ==The Lyman series== The version of the Rydberg formula that generated the Lyman series was: {1 \over \lambda} = R_\text{H} \left( 1 - \frac{1}{n^2} \right) \qquad \left( R_\text{H} \approx 1.0968{\times}10^7\,\text{m}^{-1} \approx \frac{13.6\,\text{eV}}{hc} \right) where n is a natural number greater than or equal to 2 (i.e., ). The Lyman limit is the short-wavelength end of the hydrogen Lyman series, at . The wavelengths in the Lyman series are all ultraviolet: n Wavelength (nm) 2 121.56701Kramida, A., Ralchenko, Yu., Reader, J., and NIST ASD Team (2019). He suggested that most of the absorption lines were all due to the same Lyman- alpha transition. thumb|A computer simulation of a possible Lyman-alpha forest configuration at z = 3 In astronomical spectroscopy, the Lyman-alpha forest is a series of absorption lines in the spectra of distant galaxies and quasars arising from the Lyman-alpha electron transition of the neutral hydrogen atom. The Lyman-alpha transition corresponds to an electron transitioning between the ground state (n = 1) and the first excited state (n = 2). Since neutral hydrogen clouds in the intergalactic medium are at different degrees of redshift (due to their varying distance from Earth), their absorption lines are observed at a range of wavelengths. The Lyman-alpha absorption lines in the quasar spectra result from intergalactic gas through which the galaxy or quasar's light has traveled. The Lyman-alpha spectral line has a laboratory wavelength (or rest wavelength) of 1216 Å, which is in the ultraviolet portion of the electromagnetic spectrum. The rest of the lines of the spectrum (all in the ultraviolet) were discovered by Lyman from 1906-1914. Therefore, each wavelength of the emission lines corresponds to an electron dropping from a certain energy level (greater than 1) to the first energy level. == See also == * Bohr model * H-alpha * Hydrogen spectral series * K-alpha * Lyman-alpha line * Lyman continuum photon * Moseley's law * Rydberg formula * Balmer series ==References== Category:Emission spectroscopy Category:Hydrogen physics DOI: https://doi.org/10.18434/T4W30F 3 102.57220 4 97.253650 5 94.974287 6 93.780331 7 93.0748142 8 92.6225605 9 92.3150275 10 92.0963006 11 91.9351334 ∞, the Lyman limit 91.1753 ==Explanation and derivation== In 1914, when Niels Bohr produced his Bohr model theory, the reason why hydrogen spectral lines fit Rydberg's formula was explained. More specifically, Ly-α lies in vacuum UV (VUV), characterized by a strong absorption in the air. ==Fine structure== thumb|The Lyman-alpha doublet. The Lyman-alpha line, typically denoted by Ly-α, is a spectral line of hydrogen (or, more generally, of any one-electron atom) in the Lyman series. 
For the same reason, Lyman-alpha astronomy is ordinarily carried out by satellite- borne instruments, except for observing extremely distant sources whose redshifts allow the line to penetrate the Earth atmosphere. Each individual cloud leaves its fingerprint as an absorption line at a different position in the observed spectrum. ==Use as a tool in astrophysics== The Lyman-alpha forest is an important probe of the intergalactic medium and can be used to determine the frequency and density of clouds containing neutral hydrogen, as well as their temperature. The Lyman series of spectral lines are produced by electrons transitioning between the ground state and higher energy levels (excited states). The greater the difference in the principal quantum numbers, the higher the energy of the electromagnetic emission. ==History== thumb|upright=1.3|The Lyman series The first line in the spectrum of the Lyman series was discovered in 1906 by physicist Theodore Lyman, who was studying the ultraviolet spectrum of electrically excited hydrogen gas. ","Far ultraviolet light is absorbed effectively by the charged components of the ISM, including atomic helium, which has a typical absorption wavelength of about 121.5 nanometers, the Lyman-alpha transition.","Far ultraviolet light is absorbed effectively by the neutral components of the ISM, including atomic hydrogen, which has a typical absorption wavelength of about 121.5 nanometers, the Lyman-alpha transition.","Far ultraviolet light is absorbed effectively by the charged components of the ISM, including atomic hydrogen, which has a typical absorption wavelength of about 121.5 nanometers, the Lyman-alpha transition.","Far ultraviolet light is absorbed effectively by the neutral components of the ISM, including atomic helium, which has a typical absorption wavelength of about 121.5 nanometers, the Lyman-alpha transition.","Far ultraviolet light is absorbed effectively by the neutral components of the ISM, including atomic hydrogen, which has a typical absorption wavelength of about 212.5 nanometers, the Lyman-alpha transition.",B,kaggle200,"The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transition. That this mechanism might affect the population of the hyperfine states in neutral hydrogen was first suggested in 1952 by S. A. Wouthuysen, and then further developed by George B. Field in 1959.
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions 2P_{3/2} → 1S_{1/2} and 2P_{1/2} → 1S_{1/2}, which together form the Lyman-alpha doublet.
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Far ultraviolet light is absorbed effectively by the neutral components of the ISM. For example, a typical absorption wavelength of atomic hydrogen lies at about 121.5 nanometers, the Lyman-alpha transition. Therefore, it is nearly impossible to see light emitted at that wavelength from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen.","Wouthuysen–Field coupling is a mechanism that couples the spin temperature of neutral hydrogen to Lyman-alpha radiation, which decouples the neutral hydrogen from the CMB. The energy of the Lyman-alpha transition is 10.2 eV—this energy is approximately two million times greater than the hydrogen line, and is produced by astrophysical sources such as stars and quasars. Neutral hydrogen absorbs Lyman-alpha photons, and then re-emits Lyman-alpha photons, and may enter either of the two spin states. This process causes a redistribution of the electrons between the hyperfine states, decoupling the neutral hydrogen from the CMB photons.The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transition. That this mechanism might affect the population of the hyperfine states in neutral hydrogen was first suggested in 1952 by S. A. Wouthuysen, and then further developed by George B. Field in 1959.The effect of Lyman-alpha photons on the hyperfine levels depends upon the relative intensities of the red and blue wings of the Lyman-alpha line, reflecting the very small difference in energy of the hyperfine states relative to the Lyman-alpha transition. At a cosmological redshift of z∼6 , Wouthuysen–Field coupling is expected to raise the spin temperature of neutral hydrogen above that of the CMB, and produce emission in the hydrogen line.
Lyman-alpha forest In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Extinction provides one of the best ways of mapping the three-dimensional structure of the ISM, especially since the advent of accurate distances to millions of stars from the Gaia mission. The total amount of dust in front of each star is determined from its reddening, and the dust is then located along the line of sight by comparing the dust column density in front of stars projected close together on the sky, but at different distances. By 2022 it was possible to generate a map of ISM structures within 3 kpc (10,000 light years) of the Sun.Far ultraviolet light is absorbed effectively by the neutral hydrogen gas the ISM. Specifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines. Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen. All photons with wavelength < 91.6 nm, the Lyman limit, can ionize hydrogen and are also very strongly absorbed. The absorption gradually decreases with increasing photon energy, and the ISM begins to become transparent again in soft X-rays, with wavelengths shorter than about 1 nm.","Therefore, it is nearly impossible to see light emitted at that wavelength from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen.Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogenFor example, a typical absorption wavelength of atomic hydrogen lies at about 121.5 nanometers, the Lyman-alpha transitionThe energy of the Lyman-alpha transition is 10.2 eV—this energy is approximately two million times greater than the hydrogen line, and is produced by astrophysical sources such as stars and quasarsSpecifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines- The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transitionField in 1959.
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasarsField in 1959.The effect of Lyman-alpha photons on the hyperfine levels depends upon the relative intensities of the red and blue wings of the Lyman-alpha line, reflecting the very small difference in energy of the hyperfine states relative to the Lyman-alpha transitionAt a cosmological redshift of z∼6 , Wouthuysen–Field coupling is expected to raise the spin temperature of neutral hydrogen above that of the CMB, and produce emission in the hydrogen line.
Lyman-alpha forest In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant gal","Therefore, it is nearly impossible to see light emitted at that wavelength from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen.Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogenFor example, a typical absorption wavelength of atomic hydrogen lies at about 121.5 nanometers, the Lyman-alpha transitionThe energy of the Lyman-alpha transition is 10.2 eV—this energy is approximately two million times greater than the hydrogen line, and is produced by astrophysical sources such as stars and quasarsSpecifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines- The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transitionField in 1959.
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasarsField in 1959.The effect of Lyman-alpha photons on the hyperfine levels depends upon the relative intensities of the red and blue wings of the Lyman-alpha line, reflecting the very small difference in energy of the hyperfine states relative to the Lyman-alpha transitionAt a cosmological redshift of z∼6 , Wouthuysen–Field coupling is expected to raise the spin temperature of neutral hydrogen above that of the CMB, and produce emission in the hydrogen line.
Lyman-alpha forest In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant gal[SEP]What is the reason that it is nearly impossible to see light emitted at the Lyman-alpha transition wavelength from a star farther than a few hundred light years from Earth?","['B', 'C', 'D']",1.0
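The Lyman-alpha row quotes the Rydberg formula for the Lyman series, 1/λ = R_H(1 − 1/n²). A minimal sketch that reproduces the tabulated wavelengths in the row's context (≈121.6 nm for Lyman-alpha, ≈91.2 nm for the Lyman limit), using the approximate R_H value given there:

```python
# Minimal sketch: Lyman series wavelengths from the Rydberg formula quoted above,
# 1/lambda = R_H * (1 - 1/n^2), with the approximate R_H value given in the row.
R_H = 1.0968e7  # m^-1, approximate Rydberg constant for hydrogen (from the row's context)

def lyman_wavelength_nm(n):
    """Wavelength in nm of the n -> 1 transition, for n >= 2."""
    return 1.0 / (R_H * (1.0 - 1.0 / n**2)) * 1e9

for n in range(2, 7):
    print(f"n = {n}: {lyman_wavelength_nm(n):8.2f} nm")
print(f"Lyman limit (n -> infinity): {1.0 / R_H * 1e9:6.2f} nm")
```

The n = 2 line is the ~121.5 nm Lyman-alpha transition that, per the row's answer, neutral hydrogen in the ISM absorbs so effectively.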
What is a Schwarzschild black hole?,"A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum. Any non-rotating and non-charged mass that is smaller than its Schwarzschild radius forms a black hole. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass. The Schwarzschild solution is the simplest spherically symmetric solution of the Einstein equations with zero cosmological constant, and it describes a black hole event horizon in otherwise empty space. According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equations. (Note that a (non-rotating) black hole is a spherical region in space that surrounds the singularity at its center; it is not the singularity itself.) In Einstein's theory of general relativity, the Schwarzschild metric (also known as the Schwarzschild solution) is an exact solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. In the vicinity of a Schwarschild black hole, space curves so much that even light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. == Formulation == The Schwarzschild metric is a spherically symmetric Lorentzian metric (here, with signature convention ), defined on (a subset of) \mathbb{R}\times \left(E^3 - O\right) \cong \mathbb{R} \times (0,\infty) \times S^2 where E^3 is 3 dimensional Euclidean space, and S^2 \subset E^3 is the two sphere. The de Sitter–Schwarzschild space-time is a combination of the two, and describes a black hole horizon spherically centered in an otherwise de Sitter universe. Black holes can be classified based on their Schwarzschild radius, or equivalently, by their density, where density is defined as mass of a black hole divided by the volume of its Schwarzschild sphere. The Schwarzschild black hole is characterized by a surrounding spherical boundary, called the event horizon, which is situated at the Schwarzschild radius, often called the radius of a black hole. The Schwarzschild radius or the gravitational radius is a physical parameter in the Schwarzschild solution to Einstein's field equations that corresponds to the radius defining the event horizon of a Schwarzschild black hole. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. The solution of the Einstein field equations is valid for any mass , so in principle (according to general relativity theory) a Schwarzschild black hole of any mass could exist if conditions became sufficiently favorable to allow for its formation. The Schwarzschild solution, taken to be valid for all , is called a Schwarzschild black hole. Black holes are a class of astronomical objects that have undergone gravitational collapse, leaving behind spheroidal regions of space from which nothing can escape, not even light. Any physical object whose radius becomes less than or equal to the Schwarzschild radius has undergone gravitational collapse and become a black hole. 
== Alternative coordinates == The Schwarzschild solution can be expressed in a range of different choices of coordinates besides the Schwarzschild coordinates used above. Schwarzschild wormholes and Schwarzschild black holes are different mathematical solutions of general relativity and the Einstein–Cartan theory. The de Sitter–Schwarzschild solution is the simplest solution which has both. == Metric == The metric of any spherically symmetric solution in Schwarzschild form is: :: ds^2 = - f(r) dt^2 + {dr^2 \over f(r)} + r^2(d\theta^2 + \sin^2\theta \,d\phi^2) \, The vacuum Einstein equations give a linear equation for ƒ(r), which has as solutions: :: f(r)=1-2a/r \, :: f(r)= 1 - b r^2 \, The first is a zero stress energy solution describing a black hole in empty space time, the second (with b positive) describes de Sitter space with a stress-energy of a positive cosmological constant of magnitude 3b. The surface at the Schwarzschild radius acts as an event horizon in a non-rotating body (a rotating black hole operates slightly differently). ","A black hole that has mass but neither electric charge nor angular momentum, and is not spherically symmetric, according to Birkhoff's theorem.","A black hole that has mass, electric charge, and angular momentum, and is spherically symmetric, according to Birkhoff's theorem.","A black hole that has mass but neither electric charge nor angular momentum, and is spherically symmetric, according to Birkhoff's theorem.","A black hole that has neither mass nor electric charge nor angular momentum, and is not spherically symmetric, according to Birkhoff's theorem.","A black hole that has mass, electric charge, and angular momentum, and is not spherically symmetric, according to Birkhoff's theorem.",C,kaggle200,"The hypothetical black hole electron is super-extremal (having more charge and angular momentum than a black hole of its mass ""should"").
The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole ""sucking in everything"" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.","Physical properties The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole ""sucking in everything"" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality Q24πϵ0+c2J2GM2≤GM2 for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities, when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations.Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is J≤GM2c, allowing definition of a dimensionless spin parameter such that 1.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentum. A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
By the no-hair theorem, a black hole can only have three fundamental properties: mass, electric charge, and angular momentum. The angular momentum of a stellar black hole is due to the conservation of angular momentum of the star or objects that produced it.","A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentumA Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equationsThese black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentumA Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
By the no-hair theorem, a black hole can only have three fundamental properties: mass, electric charge, and angular momentumThe popular notion of a black hole ""sucking in everything"" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equationsThis means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same massBlack holes with the minimum possible mass satisfying this inequality are called extremalThe most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the massNon-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric de","A Schwarzschild black hole or static black hole is a black hole that has no charge or angular momentumA Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equationsThese black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentumA Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass.
By the no-hair theorem, a black hole can only have three fundamental properties: mass, electric charge, and angular momentumThe popular notion of a black hole ""sucking in everything"" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.
According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric, vacuum solution of the Einstein field equationsThis means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same massBlack holes with the minimum possible mass satisfying this inequality are called extremalThe most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the massNon-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric de[SEP]What is a Schwarzschild black hole?","['C', 'A', 'B']",1.0
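The Schwarzschild row refers repeatedly to the Schwarzschild radius without giving the expression. A minimal sketch using the standard relation r_s = 2GM/c² (textbook formula; the constants and example masses below are approximate values supplied for illustration, not taken from the row):

```python
# Minimal sketch: Schwarzschild radius r_s = 2*G*M/c^2, the event-horizon radius of the
# non-rotating, uncharged (Schwarzschild) black hole described in the row above.
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant (approximate)
c = 2.998e8        # m/s, speed of light (approximate)
M_sun = 1.989e30   # kg, solar mass (approximate)

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / c**2

examples = [("1 solar mass", M_sun),
            ("Earth (~5.97e24 kg)", 5.97e24),
            ("Sgr A* (~4.3e6 M_sun)", 4.3e6 * M_sun)]
for label, m in examples:
    print(f"{label:22s} r_s = {schwarzschild_radius(m):.3e} m")
```

Per the row, any non-rotating, uncharged mass compressed inside this radius has collapsed to a Schwarzschild black hole; for the Sun the radius comes out near 3 km.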
What is the definition of Atomristor?,"Atomization may also refer to: ==Science and technology== * The making of an aerosol, which is a colloidal suspension of fine solid particles or liquid droplets in a gas * An apparatus using an atomizer nozzle * Sprays, mists, fogs, clouds, dust clouds and smoke, which appear to be atomized * A nebulizer, which is a device used to administer medication in the form of a mist inhaled into the lungs * An electronic cigarette atomiser is a component which employs a heating element to vaporize a flavored solution, that may or may not contain nicotine, for inhalation into the lungs * The conversion of a vaporized sample into atomic components in atomic spectroscopy ==Sociology== * Atomization is frequently used as a synonym for social alienation. ==The arts== * Atomizer (album), a 1986 album by Big Black * Atomizer (band), a British synthpop duo * Atomised, a 1998 novel by Michel Houellebecq * In fiction, the complete disintegration of a targeted object into the atoms which constitute it is accomplished by shooting it with a disintegrator ray ==Places== * Atomizer Geyser, a cone geyser in Yellowstone National Park. ==See also== * Enthalpy of atomization * Atom * Spray bottle Atomism or social atomism is a sociological theory arising from the scientific notion atomic theory, coined by the ancient Greek philosopher Democritus and the Roman philosopher Lucretius. In the scientific rendering of the word, atomism refers to the notion that all matter in the universe is composed of basic indivisible components, or atoms. An atom interferometer is an interferometer which uses the wave character of atoms. Atomization refers to breaking bonds in some substance to obtain its constituent atoms in gas phase. Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei. ATOM stands for ""Abolish Testing. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. Physics research groups are usually so classified. ==Isolated atoms== Atomic physics primarily considers atoms in isolation. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. When placed into the field of sociology, atomism assigns the individual as the basic unit of analysis for all implications of social life. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. Detailed overview of atom interferometers at that time (good introductions and theory). Similar to optical interferometers, atom interferometers measure the difference in phase between atomic matter waves along different paths. The atom is said to have undergone the process of ionization. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. 
It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. ",Atomristor is a flexible memristive device comprising a MoOx/MoS2 heterostructure sandwiched between silver electrodes on a plastic foil.,Atomristor is a prominent memcapacitive effect observed in switches with memristive behavior.,Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets.,Atomristor is a printing and solution-processing technology used to fabricate memristive devices.,Atomristor is a type of two-dimensional layered transition metal dichalcogenides (TMDs) used in the fabrication of memristive devices.,C,kaggle200,"In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. The network consists of layers of memristive devices, each of which is connected to other layers through a set of weights. These weights are adjusted during the training process, allowing the network to learn and adapt to new input data.
The concept of memristive networks was first introduced by Leon Chua in the 1976 paper ""Memristive Devices and Systems."" Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. in the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride, which is the thinnest memory material of around 0.33 nm. These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD/MOCVD, enabling low-cost fabrication. Afterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems. In 2020, atomistic understanding of the conductive virtual point mechanism was elucidated in an article in nature nanotechnology.
In 2014, Bessonov et al. reported a flexible memristive device comprising a MoO/MoS heterostructure sandwiched between silver electrodes on a plastic foil. The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost. The memristive behaviour of switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.","Memristive networks and mathematical models of circuit interactions The concept of memristive networks was first introduced by Leon Chua in his 1965 paper ""Memristive Devices and Systems."" Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Layered memristor In 2014, Bessonov et al. reported a flexible memristive device comprising a MoOx/MoS2 heterostructure sandwiched between silver electrodes on a plastic foil. The fabrication method is entirely based on printing and solution-processing technologies using two-dimensional layered transition metal dichalcogenides (TMDs). The memristors are mechanically flexible, optically transparent and produced at low cost. The memristive behaviour of switches was found to be accompanied by a prominent memcapacitive effect. High switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.
Atomristor Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets. In 2018, Ge and Wu et al. in the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structure. The work was later extended to monolayer hexagonal boron nitride, which is the thinnest memory material of around 0.33 nm. These atomristors offer forming-free switching and both unipolar and bipolar operation. The switching behavior is found in single-crystalline and poly-crystalline films, with various conducting electrodes (gold, silver and graphene). Atomically thin TMD sheets are prepared via CVD/MOCVD, enabling low-cost fabrication. Afterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systems. In 2020, atomistic understanding of the conductive virtual point mechanism was elucidated in an article in nature nanotechnology.","In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheetsThese atomristors offer forming-free switching and both unipolar and bipolar operationHigh switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.
Atomristor Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets- In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brainin the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structurein the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structureThe network consists of layers of memristive devices, each of which is connected to other layers through a set of weightsAfterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systemsAfterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systemsIn fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Layered memristor In 2014, Bessonov et alThese weights are adjusted during the training process, allowing the network to learn and","In fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheetsThese atomristors offer forming-free switching and both unipolar and bipolar operationHigh switching performance, demonstrated synaptic plasticity and sustainability to mechanical deformations promise to emulate the appealing characteristics of biological neural systems in novel computing technologies.
Atomristor Atomristor is defined as the electrical devices showing memristive behavior in atomically thin nanomaterials or atomic sheets- In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brainin the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structurein the Akinwande group at the University of Texas, first reported a universal memristive effect in single-layer TMD (MX2, M = Mo, W; and X = S, Se) atomic sheets based on vertical metal-insulator-metal (MIM) device structureThe network consists of layers of memristive devices, each of which is connected to other layers through a set of weightsAfterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systemsAfterwards, taking advantage of the low ""on"" resistance and large on/off ratio, a high-performance zero-power RF switch is proved based on MoS2 or h-BN atomristors, indicating a new application of memristors for 5G, 6G and THz communication and connectivity systemsIn fact, memristive devices in circuits have complex interactions due to Kirchhoff's laws.
Layered memristor In 2014, Bessonov et alThese weights are adjusted during the training process, allowing the network to learn and[SEP]What is the definition of Atomristor?","['C', 'D', 'A']",1.0
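The memristor/atomristor row describes memristive switching only qualitatively. As an illustration of what "memristive behavior" means electrically, here is a minimal sketch of the idealized linear dopant-drift memristor model (the Strukov/HP-style textbook model; explicitly not a model of the MoS2 or h-BN atomristors in the row, and every parameter value below is an illustrative assumption):

```python
# Minimal sketch: pinched-hysteresis (memristive) I-V behavior from the idealized linear
# dopant-drift memristor model. This is NOT a model of the MoS2 / h-BN atomristors or the
# MoOx/MoS2 device described above; all parameter values are illustrative assumptions.
import numpy as np

R_on, R_off = 100.0, 16e3      # ohm, limiting resistances (illustrative)
D, mu_v = 10e-9, 1e-14         # device thickness (m), dopant mobility (m^2 s^-1 V^-1)
w = 0.1 * D                    # initial width of the doped (low-resistance) region

dt, f, V0 = 1e-5, 1.0, 1.0     # time step (s), drive frequency (Hz), drive amplitude (V)
t = np.arange(0.0, 2.0 / f, dt)
v = V0 * np.sin(2 * np.pi * f * t)
current = np.empty_like(t)

for k, vk in enumerate(v):
    M = R_on * (w / D) + R_off * (1.0 - w / D)          # instantaneous memristance
    i = vk / M
    current[k] = i
    w = np.clip(w + mu_v * R_on / D * i * dt, 0.0, D)   # state update (explicit Euler)

M_end = R_on * (w / D) + R_off * (1.0 - w / D)
print(f"final memristance {M_end:.0f} ohm, peak current {1e3 * abs(current).max():.2f} mA")
```

Driven by a sinusoidal voltage, the current–voltage trace of such a device forms the pinched hysteresis loop that is the usual electrical fingerprint of memristive behavior: the resistance at any instant depends on the history of the current that has flowed through it.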
Who published the first theory that was able to encompass previously separate field theories to provide a unifying theory of electromagnetism?,"Maxwell's equations for electromagnetism have been called the ""second great unification in physics"" where the first one had been realised by Isaac Newton. Chapters six through eight present the development of electromagnetism as a line from Faraday to Maxwell, including the development of theories of electricity and magnetism modelled on Newtonian mechanics. A History of the Theories of Aether and Electricity is any of three books written by British mathematician Sir Edmund Taylor Whittaker FRS FRSE on the history of electromagnetic theory, covering the development of classical electromagnetism, optics, and aether theories. The book covers the history of aether theories and the development of electromagnetic theory up to the 20th century. James Clerk Maxwell used Faraday's conceptualisation to help formulate his unification of electricity and magnetism in his electromagnetic theory. James Clerk Maxwell (13 June 1831 – 5 November 1879) was a Scottish mathematician and scientist responsible for the classical theory of electromagnetic radiation, which was the first theory to describe electricity, magnetism and light as different manifestations of the same phenomenon. In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. Einstein was not alone in his attempts to unify electromagnetism and gravity; a large number of mathematicians and physicists, including Hermann Weyl, Arthur Eddington, and Theodor Kaluza also attempted to develop approaches that could unify these interactions. The work covers the development of optics, electricity, and magnetism, with some side-plots in the history of thermodynamics and gravitation, over three centuries, through the close of the nineteenth century. ====Overview (vol. 1)==== Volume I: The Classical Theories contents # Title 1 The theory of the aether to the death of Newton 2 Electric and magnetic science, prior to the introduction of the potentials 3 Galvanism, from Galvani to Ohm 4 The luminiferous medium from Bradley to Fresnel 5 The aether as an elastic solid 6 Faraday 7 The mathematical electricians of the middle of the nineteenth century 8 Maxwell 9 Models of the aether 10 The followers of Maxwell 11 Conduction in solutions and gases, from Faraday to the discovery of the electron 12 Classical radiation-theory 13 Classical theory in the age of Lorentz Chapter one of the first volume was renamed the theory of the aether to the death of Newton after being mostly rewritten, though it still focuses on René Descartes, Isaac Newton, Pierre de Fermat, Robert Hooke, and Christiaan Huygens, among others. Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature – a unified field theory. Although new ""classical"" unified field theories continue to be proposed from time to time, often involving non-traditional elements such as spinors or relating gravitation to an electromagnetic force, none have been generally accepted by physicists yet. 
==See also== *Affine gauge theory *Classical field theory *Gauge gravitation theory *Metric-affine gravitation theory ==References== Category:History of physics * Classical unified field theories But even after his Treatise and subsequent discovery of light as an electromagnetic wave, Maxwell continued to believe in the aether theory: > ""Another theory of electricity which I prefer denies action at a distance > and attributes electric action to tensions and pressures in an all-pervading > medium, these stresses being the same in kind with those familiar to > engineers, and the medium being identical with that in which light is > supposed to be propagated."" Faraday's insights into the behavior of magnetic fields would prove invaluable to James Clerk Maxwell's course to unite electricity and magnetism into one theory. Field theory had its origins in the 18th century in a mathematical formulation of Newtonian mechanics, but it was seen as deficient as it implied action at a distance. Faraday advanced what has been termed the molecular theory of electricityA treatise on electricity, in theory and practice, Volume 1 By Auguste de La Rive. Current mainstream research on unified field theories focuses on the problem of creating a quantum theory of gravity and unifying with the other fundamental theories in physics, all of which are quantum field theories. This discovery gave a clue to the subsequently proved intimate relationship between electricity and magnetism which was promptly followed up by Ampère who some months later, in September 1820, presented the first elements of his new theory, which he developed in the following years culminating with the publication in his 1827 """" (Memoir on the Mathematical Theory of Electrodynamic Phenomena, Uniquely Deduced from Experience) announcing his celebrated theory of electrodynamics, relating to the force that one current exerts upon another, by its electro-magnetic effects, namely # Two parallel portions of a circuit attract one another if the currents in them are flowing in the same direction, and repel one another if the currents flow in the opposite direction. Perhaps the most original, and certainly the most permanent in their influence, were his memoirs on the theory of electricity and magnetism, which virtually created a new branch of mathematical physics. For a survey of current work toward creating a quantum theory of gravitation, see quantum gravity. ==Overview== The early attempts at creating a unified field theory began with the Riemannian geometry of general relativity, and attempted to incorporate electromagnetic fields into a more general geometry, since ordinary Riemannian geometry seemed incapable of expressing the properties of the electromagnetic field. Now Maxwell logically showed how these methods of calculation could be applied to the electro-magnetic field.In November 1847, Clerk Maxwell entered the University of Edinburgh, learning mathematics from Kelland, natural philosophy from J. D. Forbes, and logic from Sir W. R. Hamilton. ",Maxwell,Einstein,Galileo,Faraday,Newton,A,kaggle200,"In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.
The Novak–Tyson model, first published in the paper titled ""Numerical analysis of a comprehensive model of M-phase control in Xenopus oocyte extracts and intact embryos"", builds on the Goldbeter and Tyson 1991 models in order to generate a unifying theory, encapsulating the observed dynamics of the cyclin-MPF relationship.
This area of research was summarized in terms understandable by the layperson in a 2008 article in New Scientist that offered a unifying theory of brain function. Friston makes the following claims about the explanatory power of the theory:
The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetime.","Category theory is a unifying theory of mathematics that was initially developed in the second half of the 20th century. In this respect it is an alternative and complement to set theory. A key theme from the ""categorical"" point of view is that mathematics requires not only certain kinds of objects (Lie groups, Banach spaces, etc.) but also mappings between them that preserve their structure.
Script-based semantic theory of humor The script-based semantic theory of humor (SSTH) was introduced by Victor Raskin in ""Semantic Mechanisms of Humor"", published 1985. While being a variant on the more general concepts of the Incongruity theory of humor (see above), it is the first theory to identify its approach as exclusively linguistic. As such it concerns itself only with verbal humor: written and spoken words used in narrative or riddle jokes concluding with a punch line.
Classic theory The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetime.","This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetismIn 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic fieldBy 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetime.By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetimeIn 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currentsFriston makes the following claims about the explanatory power of the theory:
The first successful classical unified field theory was developed by James Clerk Maxwell- In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetismUntil then, electricity and magnetism had been thought of as unrelated phenomenaAs such it concerns itself only with verbal humor: written and spoken words used in narrative or riddle jokes concluding with a punch line.
Classic theory The first successful classical unified field theory was developed by James Clerk MaxwellCategory theory is a unifying theory of mathematics that was initially developed in the second half of the 20th centuryA further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outse","This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetismIn 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic fieldBy 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetime.By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetimeIn 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currentsFriston makes the following claims about the explanatory power of the theory:
The first successful classical unified field theory was developed by James Clerk Maxwell- In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetismUntil then, electricity and magnetism had been thought of as unrelated phenomenaAs such it concerns itself only with verbal humor: written and spoken words used in narrative or riddle jokes concluding with a punch line.
Classic theory The first successful classical unified field theory was developed by James Clerk MaxwellCategory theory is a unifying theory of mathematics that was initially developed in the second half of the 20th centuryA further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outse[SEP]Who published the first theory that was able to encompass previously separate field theories to provide a unifying theory of electromagnetism?","['A', 'B', 'D']",1.0
What is the relevant type of coherence for the Young's double-slit interferometer?,"The coherence encountered in most optical experiments, including the classic Young's double slit experiment and Mach–Zehnder interferometer, is first order coherence. Young's double slit experiment demonstrates the dependence of interference on coherence, specifically on the first-order correlation. This experiment is equivalent to the Mach–Zehnder interferometer with the caveat that Young's double slit experiment is concerned with spatial coherence, while the Mach–Zehnder interferometer relies on temporal coherence. The coherence length can also be measured using a Michelson interferometer and is the optical path length difference of a self-interfering laser beam which corresponds to \, \frac{1}{\, e \,} \approx 37\% \, fringe visibility, where the fringe visibility is defined as :V = \frac{\; I_\max - I_\min \;}{ I_\max + I_\min} ~, where \, I \, is the fringe intensity. The chief benefit of coherence scanning interferometry is that systems can be designed that do not suffer from the 2 pi ambiguity of coherent interferometry, and as seen in Fig. 18, which scans a 180μm x 140μm x 10μm volume, it is well suited to profiling steps and rough surfaces. The theory of partial coherence was awoken in the 1930s due to work by Pieter Hendrik van Cittert and Frits Zernike. ==Topics in coherence theory== * Visibility * Mutual coherence function * Degree of coherence * Self coherence function * Coherence function * Low frequency fluctuations * General interference law * Van Cittert–Zernike theorem * Michelson stellar interferometer * Correlation interferometry * Hanbury–Brown and Twiss effect * Phase-contrast microscope * Pseudothermal light * Englert–Greenberger duality relation * Coherence Collapse ==See also== * Nonclassical light * Optical coherence tomography ==References== * Eugene Hecht and Alfred Zajac, Optics, (1974) Addison-Wesley Publishing, Reading, Massachusetts . Correlation interferometry uses coherences of fourth-order and higher to perform stellar measurements. In physics, coherence length is the propagation distance over which a coherent wave (e.g. an electromagnetic wave) maintains a specified degree of coherence. Such a distinction is not captured by the classical description on wave interference. == Mathematical properties of coherence functions == For the purposes of standard optical experiments, coherence is just first-order coherence and higher-order coherences are generally ignored. The N-slit interferometer is an extension of the double-slit interferometer also known as Young's double-slit interferometer. Higher order coherence extends the concept of coherence -- the ability of waves to interfere -- to quantum optics and coincidence experiments. In physics, coherence theory is the study of optical effects arising from partially coherent light and radio sources. Many aspects of modern coherence theory are studied in quantum optics. As coherence is the ability to interfere visibility and coherence are linked: :|\gamma^{(1)}(x_1,x_2)| = 1 means highest contrast, complete coherence :0 < |\gamma^{(1)}(x_1,x_2)| < 1 means partial fringe visibility, partial coherence :|\gamma^{(1)}(x_1,x_2)| = 0 means no contrast, complete incoherence. ==== Quantum description ==== Classically, the electric field at a position \mathbf{r}, is the sum of electric field components from at the two pinholes \mathbf{r}_1 and \mathbf{r}_2 earlier times t_1, t_2 respectably i.e. 
E^+(\mathbf{r},t) = E^+(\mathbf{r_1},t_1) + E^+(\mathbf{r}_2,t_2). Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry. Hanbury Brown and Twiss used this result to compute the first order coherence from their measurement of the second order coherence. The first application of the N-slit interferometer was the generation and measurement of complex interference patterns. Consequently, coherent states have all orders of coherences as being non-zero. == See also == * Degree of coherence * Hanbury Brown and Twiss effect * Double-slit experiment * Young's interference experiment * Mach–Zehnder interferometer == References == Category:Quantum optics It is important to note that this is a roundtrip coherence length — this definition is applied in applications like OCT where the light traverses the measured displacement twice (as in a Michelson interferometer). Twyman–Green interferometer set up as a white light scanner In coherence scanning interferometry,P. de Groot, J., ""Interference Microscopy for Surface Structure Analysis,"" in Handbook of Optical Metrology, edited by T. Yoshizawa, chapt.31, pp. 791-828, (CRC Press, 2015). interference is only achieved when the path length delays of the interferometer are matched within the coherence time of the light source. ",Visibility,Coherence time,Spatial coherence,Coherence length,Diameter of the coherence area (Ac),E,kaggle200,"One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below).
In Young's double slit experiment, light from a light source is allowed to pass through two pinholes separated by some distance, and a screen is placed some distance away from the pinholes where the interference between the light waves is observed (Figure 1). Young's double slit experiment demonstrates the dependence of interference on coherence, specifically on the first-order correlation. This experiment is equivalent to the Mach-Zehnder interferometer with the caveat that Young's double slit experiment is concerned with spatial coherence, while the Mach-Zehnder interferometer relies on temporal coherence.
The ""N""-slit interferometer is an extension of the double-slit interferometer also known as Young's double-slit interferometer. One of the first known uses of ""N""-slit arrays in optics was illustrated by Newton. In the first part of the twentieth century, Michelson described various cases of ""N""-slit diffraction.
In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability for two points in space, ""x"" and ""x"", in the extent of a wave to interfere, when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only 1 value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, ""A"" (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) A is the relevant type of coherence for the Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes. Sometimes people also use ""spatial coherence"" to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.","In developmental psychology, thematic coherence is an organization of a set of meanings in and through an event. In education, for example, the thematic coherence happens when a child during a classroom session understands what all the talking is about.This expression was termed by Habermas and Bluck (2000), along with other terms such as temporal coherence, biographical coherence, and causal coherence, to describe the coherence that people talk about while narrating their own personal experiences (the many different episodes in their life, most especially in childhood and adolescence) which need to be structured within a context.In conversation — although this technique also can be found in literature — the thematic coherence is when a person (or character) ""is able to derive a general theme or principle about the self based on a narrated sequence of events.""
Coherence provides a variety of mechanisms to integrate with other services using TopLink, Java Persistence API, Oracle Golden Gate and other platforms using APIs provided by Coherence.
Coherence can be used to manage HTTP sessions via Coherence*Web, in which application services such as Oracle WebLogic Server, IBM WebSphere, Apache Tomcat and others are claimed to get the same performance, fault tolerance, and scalability as data.
In the summer of 2020, Coherence Community Edition was released as open source on GitHub. Some Coherence usage patterns are also open source and are listed and supported through the Oracle Coherence incubator. These patterns implement features such as messaging, work distribution and data replication across wide area networks with Coherence.
In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability for two spatial points x1 and x2 in the extent of a wave to interfere when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only 1 value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, Ac (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) Ac is the relevant type of coherence for the Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes. Sometimes people also use ""spatial coherence"" to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.","The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, ""A"" (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) A is the relevant type of coherence for the Young's double-slit interferometerYoung's double slit experiment demonstrates the dependence of interference on coherence, specifically on the first-order correlationThe range of separation between the two points over which there is significant interference defines the diameter of the coherence area, Ac (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) Ac is the relevant type of coherence for the Young's double-slit interferometer- One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below).
In Young's double slit experiment, light from a light source is allowed to pass through two pinholes separated by some distance, and a screen is placed some distance away from the pinholes where the interference between the light waves is observed (FigureThis experiment is equivalent to the Mach-Zehnder interferometer with the caveat that Young's double slit experiment is concerned with spatial coherence, while Mach-Zehnder interferometer relies on temporal coherence.
The ""N""-slit interferometer is an extension of the double-slit interferometer also known as Young's double-slit interferometerMore precisely, the spatial coherence is the cross-correlation between two points in a wave for all timesSpatial coherence describes the ability for two spatial points x1 and x2 in the extent of a wave to interfere when averaged over timeSpatial coherence describes the ability for two points in space, ""x"" and ""x"", in the extent of a wave to interfere, when averaged over timeSometimes people also use ""spatial coherence"" to refer to the visibility w","The range of separation between the two points over which there is significant interference defines the diameter of the coherence area, ""A"" (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) A is the relevant type of coherence for the Young's double-slit interferometerYoung's double slit experiment demonstrates the dependence of interference on coherence, specifically on the first-order correlationThe range of separation between the two points over which there is significant interference defines the diameter of the coherence area, Ac (Coherence length, often a feature of a source, is usually an industrial term related to the coherence time of the source, not the coherence area in the medium.) Ac is the relevant type of coherence for the Young's double-slit interferometer- One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below).
In Young's double slit experiment, light from a light source is allowed to pass through two pinholes separated by some distance, and a screen is placed some distance away from the pinholes where the interference between the light waves is observed (FigureThis experiment is equivalent to the Mach-Zehnder interferometer with the caveat that Young's double slit experiment is concerned with spatial coherence, while Mach-Zehnder interferometer relies on temporal coherence.
The ""N""-slit interferometer is an extension of the double-slit interferometer also known as Young's double-slit interferometerMore precisely, the spatial coherence is the cross-correlation between two points in a wave for all timesSpatial coherence describes the ability for two spatial points x1 and x2 in the extent of a wave to interfere when averaged over timeSpatial coherence describes the ability for two points in space, ""x"" and ""x"", in the extent of a wave to interfere, when averaged over timeSometimes people also use ""spatial coherence"" to refer to the visibility w[SEP]What is the relevant type of coherence for the Young's double-slit interferometer?","['E', 'D', 'C']",1.0
What is the Peierls bracket in canonical quantization?,"In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracket. In quantum mechanics, the Peierls bracket becomes a commutator i.e. a Lie bracket. ==References== Peierls, R. The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second class constraints in Hamiltonian mechanics, and to thus allow them to undergo canonical quantization. Now, suppose one wishes to employ canonical quantization, then the phase-space coordinates become operators whose commutators become times their classical Poisson bracket. When applying canonical quantization on a constrained Hamiltonian system, the commutator of the operators is supplanted by times their classical Dirac bracket. This example illustrates the need for some generalization of the Poisson bracket which respects the system's constraints, and which leads to a consistent quantization procedure. This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. The canonical structure (also known as the symplectic structure) of classical mechanics consists of Poisson brackets enclosing these variables, such as . In physics, canonical quantization is a procedure for quantizing a classical theory, while attempting to preserve the formal structure, such as symmetries, of the classical theory, to the greatest extent possible. If one wants to canonically quantize a general system, then one needs the Dirac brackets. The central relation between these operators is a quantum analog of the above Poisson bracket of classical mechanics, the canonical commutation relation, [\hat{X},\hat{P}] = \hat{X}\hat{P}-\hat{P}\hat{X} = i\hbar. Theorem 13.13 However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, today called the Moyal bracket, and, in general, quantum operators and classical observables and distributions in phase space. (Here, the curly braces denote the Poisson bracket. Canonical quantization treats the variables and as operators with canonical commutation relations at time = 0, given by [\phi(x),\phi(y)] = 0, \ \ [\pi(x), \pi(y)] = 0, \ \ [\phi(x),\pi(y)] = i\hbar \delta(x-y). A further generalization is to consider a Poisson manifold instead of a symplectic space for the classical theory and perform an ħ-deformation of the corresponding Poisson algebra or even Poisson supermanifolds. ===Geometric quantization=== In contrast to the theory of deformation quantization described above, geometric quantization seeks to construct an actual Hilbert space and operators on it. In quantum field theory, it is known as canonical quantization, in which the fields (typically as the wave functions of matter) are thought of as field operators, in a manner similar to how the physical quantities (position, momentum, etc.) are thought of as operators in first quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context. == Inadequacy of the standard Hamiltonian procedure == The standard development of Hamiltonian mechanics is inadequate in several specific situations: # When the Lagrangian is at most linear in the velocity of at least one coordinate; in which case, the definition of the canonical momentum leads to a constraint. 
On one hand, canonical quantization gives the above commutation relation, but on the other hand 1 and are constraints that must vanish on physical states, whereas the right-hand side cannot vanish. In general, for the quantities (observables) involved, and providing the arguments of such brackets, ħ-deformations are highly nonunique—quantization is an ""art"", and is specified by the physical context. The details of the canonical quantization depend on the field being quantized, and whether it is free or interacting. ====Real scalar field==== A scalar field theory provides a good example of the canonical quantization procedure.This treatment is based primarily on Ch. 1 in Classically, a scalar field is a collection of an infinity of oscillator normal modes. ",The Peierls bracket is a mathematical symbol used to represent the Poisson algebra in the canonical quantization method.,The Peierls bracket is a mathematical tool used to generate the Hamiltonian in the canonical quantization method.,The Peierls bracket is a Poisson bracket derived from the action in the canonical quantization method that converts the quotient algebra into a Poisson algebra.,The Peierls bracket is a mathematical symbol used to represent the quotient algebra in the canonical quantization method.,The Peierls bracket is a mathematical tool used to generate the Euler-Lagrange equations in the canonical quantization method.,C,kaggle200,"and providing the arguments of such brackets, ""ħ""-deformations are highly nonunique—quantization is an ""art"", and is specified by the physical context.
It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions ""f"" and ""g"".
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows""). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented over by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ħ -deformed in the same way as in canonical quantization.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracket. It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.","The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows""). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented over by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ -deformed in the same way as in canonical quantization.
In quantum mechanics, the Peierls bracket becomes a commutator, i.e. a Lie bracket.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracket. It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance. The bracket [A,B] is defined as D_A(B) − D_B(A), as the difference between some kind of action of one quantity on the other, minus the flipped term.","This Poisson algebra is then ħ -deformed in the same way as in canonical quantization.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracketThis Poisson algebra is then ℏ -deformed in the same way as in canonical quantization.
In quantum mechanics, the Peierls bracket becomes a commutator i.ea Lie bracket.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracketThen, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracketIt can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.The bracket [A,B] is defined as DA(B)−DB(A) ,as the difference between some kind of action of one quantity on the other, minus the flipped term- and providing the arguments of such brackets, ""ħ""-deformations are highly nonunique—quantization is an ""art"", and is specified by the physical context.
It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions formula_24 and formula_28.
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows"")It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.It starts with the classical algebra of all (smooth) functionals over the configuration spaceThis algebra is quotiented over by the ideal generated by the Euler–Lagrange equationsThe method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows"")","This Poisson algebra is then ħ -deformed in the same way as in canonical quantization.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracketThis Poisson algebra is then ℏ -deformed in the same way as in canonical quantization.
In quantum mechanics, the Peierls bracket becomes a commutator i.ea Lie bracket.
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracketThen, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracketIt can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.The bracket [A,B] is defined as DA(B)−DB(A) ,as the difference between some kind of action of one quantity on the other, minus the flipped term- and providing the arguments of such brackets, ""ħ""-deformations are highly nonunique—quantization is an ""art"", and is specified by the physical context.
It is now known that there is no reasonable such quantization map satisfying the above identity exactly for all functions formula_24 and formula_28.
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows"")It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.It starts with the classical algebra of all (smooth) functionals over the configuration spaceThis algebra is quotiented over by the ideal generated by the Euler–Lagrange equationsThe method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge ""flows"")[SEP]What is the Peierls bracket in canonical quantization?","['C', 'D', 'E']",1.0
What is the isophotal diameter used for in measuring a galaxy's size?,"The distance measured by a standard ruler is what is known as the angular diameter distance. Distances can also be measured using standard candles; many different types of standard candles and rulers are needed to construct the cosmic distance ladder. == Relationship between angular size and distance == The relation between the angular diameter, θ, actual (physical) diameter, r, and distance, D, of an object from the observer is given by: : \theta \approx \frac{r}{D} where θ is measured in radians. Measuring distances is of great importance in cosmology, as the relationship between the distance and redshift of an object can be used to measure the expansion rate and geometry of the Universe. A standard ruler is an astronomical object for which the actual physical size is known. COSMIC functional size measurement is a method to measure a standard functional size of a piece of software. The foundation of the method is the ISO/IEC 19761 standard, which contains the definitions and basic principles that are described in more detail in the COSMIC measurement manual. == The applicability of the COSMIC functional size measurement method == Since the COSMIC method is based on generic software principles, these principles can be applied in various software domains. Standard candles measure another type of distance called the luminosity distance. == See also == *Standard candle *Baryon acoustic oscillations *Angular diameter distance *Parallax *Cosmic distance ladder Category:Astrometry Category:Length, distance, or range measuring devices 28 mm (twenty-eight millimeter): * 28 mm film * 28 mm scale of miniature figures COSMIC is an acronym of COmmon Software Measurement International Consortium, a voluntary organization that has developed the method and is still expanding its use to more software domains. == The method == The ""Measurement Manual"" defines the principles, rules and a process for measuring a standard functional size of a piece of software. By measuring its angular size in the sky, one can use simple trigonometry to determine its distance from Earth. Because space is expanding, there is no one, unique way of measuring the distance between source and observer. Key elements of a second generation functional size measurement method are: * Adoption of all measurement concepts from the ISO metrology * A defined measurement unit * Fully compliant with ISO/IEC 14143 * Preferably domain independent The method is based on principles rather than rules that are domain independent. Key elements of a second generation functional size measurement method are: * Adoption of all measurement concepts from the ISO metrology * A defined measurement unit * Fully compliant with ISO/IEC 14143 * Preferably domain independent The method is based on principles and rules that are domain independent. The COSMIC standard is the first second generation implementation of the ISO/IEC 14143 standard. In simple terms, this is because objects of a fixed size appear smaller the further away they are. The first generation functional size measurement methods consisted of rules that are based on empirical results. The generic principles of functional size are described in the ISO/IEC 14143 standard. As a consequence of measuring the size, the method can be used to establish benchmarks of (and subsequent estimates) regarding the effort, cost, quality and duration of software work. 
The guideline describes how to measure the functional size of distinct components. The principles of the method are based on fundamental software engineering principles, which have been subsequently tested in practice. ==References== == External links == * COSMIC website A public domain version of the COSMIC measurement manual and other technical reports * COSMIC Publications Public domain publications for the COSMIC method Category:Software metrics Category:Software engineering costs ",The isophotal diameter is a way of measuring a galaxy's distance from Earth.,The isophotal diameter is a measure of a galaxy's age.,The isophotal diameter is a measure of a galaxy's mass.,The isophotal diameter is a measure of a galaxy's temperature.,The isophotal diameter is a conventional way of measuring a galaxy's size based on its apparent surface brightness.,E,kaggle200,"The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. However, as distance increases the picture will become increasingly smoother. Analysis of this describes a magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.
Variations of this method exist. In particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
A critique of an earlier version of this method has been issued by IPAC, with the method causing errors of up to 10% in the values compared to using the isophotal diameter. The use of Petrosian magnitudes also has the disadvantage of missing most of the light outside the Petrosian aperture, which is defined relative to the galaxy's overall brightness profile, especially for elliptical galaxies, with higher signal-to-noise ratios at higher distances and redshifts. A correction for this method has been issued by Graham ""et al."" in 2005, based on the assumption that galaxies follow Sérsic's law.
The ""isophotal diameter"" is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightness. Isophotes are curves in a diagram - such as a picture of a galaxy - that adjoins points of equal brightnesses, and are useful in defining the extent of the galaxy. The apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec; sometimes expressed as ""mag arcsec""), which defines the brightness depth of the isophote. To illustrate how this unit works, a typical galaxy has a brightness flux of 18 mag/arcsec at its central region. This brightness is equivalent to the light of an 18th magnitude hypothetical point object (like a star) being spread out evenly in a one square arcsecond area of the sky. For the purposes of objectivity, the spectrum of light being used is sometimes also given in figures. As an example, the Milky Way has an average surface brightness of 22.1 B-mag/arcsec, where ""B-mag"" refers to the brightness at the B-band (445 nm wavelength of light, in the blue part of the visible spectrum).","Abell 1795 is a galaxy cluster in the Abell catalogue.
Examples of isophotal diameter measurements: Large Magellanic Cloud - 9.86 kiloparsecs (32,200 light-years) at the 25.0 B-mag/arcsec2 isophote.
Milky Way - has a diameter at the 25.0 B-mag/arcsec2 isophote of 26.8 ± 1.1 kiloparsecs (87,400 ± 3,590 light-years).
Messier 87 - has a diameter at the 25.0 B-mag/arcsec2 isophote of 40.55 kiloparsecs (132,000 light-years).
Andromeda Galaxy - has a diameter at the 25.0 B-mag/arcsec2 isophote of 46.56 kiloparsecs (152,000 light-years).
Isophotal diameter The isophotal diameter is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightness. Isophotes are curves in a diagram - such as a picture of a galaxy - that adjoins points of equal brightnesses, and are useful in defining the extent of the galaxy. The apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec2; sometimes expressed as mag arcsec−2), which defines the brightness depth of the isophote. To illustrate how this unit works, a typical galaxy has a brightness flux of 18 mag/arcsec2 at its central region. This brightness is equivalent to the light of an 18th magnitude hypothetical point object (like a star) being spread out evenly in a one square arcsecond area of the sky. For the purposes of objectivity, the spectrum of light being used is sometimes also given in figures. As an example, the Milky Way has an average surface brightness of 22.1 B-mag/arcsec−2, where B-mag refers to the brightness at the B-band (445 nm wavelength of light, in the blue part of the visible spectrum).","Abell 1795 is a galaxy cluster in the Abell catalogue.
Examples of isophotal diameter measurements: Large Magellanic Cloud - 9.86 kiloparsecs (32,200 light-years) at the 25.0 B-mag/arcsec2 isophote.
Milky Way - has a diameter at the 25.0 B-mag/arcsec2 isophote of 26.8 ± 1.1 kiloparsecs (87,400 ± 3,590 light-years).
Messier 87 - has a has a diameter at the 25.0 B-mag/arcsec2 isophote of 40.55 kiloparsecs (132,000 light-years).
Andromeda Galaxy - has a has a diameter at the 25.0 B-mag/arcsec2 isophote of 46.56 kiloparsecs (152,000 light-years).
Isophotal diameter The isophotal diameter is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightnessA correction for this method has been issued by Graham ""et al."" in 2005, based on the assumption that galaxies follow Sersic's law.
The ""isophotal diameter"" is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightnessIn particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
A critique of an earlier version of this method has been issued by IPAC, with the method causing a magnitude of error (upwards to 10%) of the values than using isophotal diameterThe apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec; sometimes expressed as ""mag arcsec""), which defines the brightness depth of the isophoteThe apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec2; sometimes expressed as mag arcsec−2), which defines the brightness depth of the isophoteIsophotes are curves in a diagram - such as a picture of a galaxy - that adjoins points of equal brightnesses, and are useful in defining the extent of the galaxyAnalysis of this describes a magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.
Variations of this method existTo illustrate how this unit works, a typic","Abell 1795 is a galaxy cluster in the Abell catalogue.
Examples of isophotal diameter measurements: Large Magellanic Cloud - 9.86 kiloparsecs (32,200 light-years) at the 25.0 B-mag/arcsec2 isophote.
Milky Way - has a diameter at the 25.0 B-mag/arcsec2 isophote of 26.8 ± 1.1 kiloparsecs (87,400 ± 3,590 light-years).
Messier 87 - has a diameter at the 25.0 B-mag/arcsec2 isophote of 40.55 kiloparsecs (132,000 light-years).
Andromeda Galaxy - has a diameter at the 25.0 B-mag/arcsec2 isophote of 46.56 kiloparsecs (152,000 light-years).
Isophotal diameter The isophotal diameter is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightnessA correction for this method has been issued by Graham ""et al."" in 2005, based on the assumption that galaxies follow Sersic's law.
The ""isophotal diameter"" is introduced as a conventional way of measuring a galaxy's size based on its apparent surface brightnessIn particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
A critique of an earlier version of this method has been issued by IPAC, with the method causing a magnitude of error (upwards to 10%) of the values than using isophotal diameterThe apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec; sometimes expressed as ""mag arcsec""), which defines the brightness depth of the isophoteThe apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec2; sometimes expressed as mag arcsec−2), which defines the brightness depth of the isophoteIsophotes are curves in a diagram - such as a picture of a galaxy - that adjoins points of equal brightnesses, and are useful in defining the extent of the galaxyAnalysis of this describes a magnitude of the pixel-to-pixel variation, which is directly related to a galaxy's distance.
Variations of this method existTo illustrate how this unit works, a typic[SEP]What is the isophotal diameter used for in measuring a galaxy's size?","['E', 'D', 'C']",1.0
What is the Maxwell's Demon thought experiment?,"The demon must allow molecules to pass in both directions in order to produce only a temperature difference; one-way passage only of faster-than-average molecules from A to B will cause higher temperature and pressure to develop on the B side. == Criticism and development == Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon. William Thomson (Lord Kelvin) was the first to use the word ""demon"" for Maxwell's concept, in the journal Nature in 1874, and implied that he intended the Greek mythology interpretation of a daemon, a supernatural being working in the background, rather than a malevolent being. == Original thought experiment == The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature, when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature. right|340px|thumb|Schematic figure of Maxwell's demon thought experiment Maxwell's demon is a thought experiment that would hypothetically violate the second law of thermodynamics. The operation of the demon is directly observed as a temperature drop in the system, with a simultaneous temperature rise in the demon arising from the thermodynamic cost of generating the mutual information. In the thought experiment, a demon controls a small massless door between two chambers of gas. Only a year later and based on an earlier theoretical proposal, the same group presented the first experimental realization of an autonomous Maxwell's demon, which extracts microscopic information from a system and reduces its entropy by applying feedback. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool down. The essence of the physical argument is to show, by calculation, that any demon must ""generate"" more entropy segregating the molecules than it could ever eliminate by the method described. John Earman and John D. Norton have argued that Szilárd and Landauer's explanations of Maxwell's demon begin by assuming that the second law of thermodynamics cannot be violated by the demon, and derive further properties of the demon from this assumption, including the necessity of consuming energy when erasing information, etc. This technique is widely described as a ""Maxwell's demon"" because it realizes Maxwell's process of creating a temperature difference by sorting high and low energy atoms into different containers. If this demon only let fast moving molecules through a trapdoor to a container, the temperature inside the container would increase without any work being applied. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Other researchers have implemented forms of Maxwell's demon in experiments, though they all differ from the thought experiment to some extent and none have been shown to violate the second law. == Origin and history of the idea == The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867. 
Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas. For more general information processes including biological information processing, both inequality and equality with mutual information hold. == Applications == Real-life versions of Maxwellian demons occur, but all such ""real demons"" or molecular demons have their entropy- lowering effects duly balanced by increase of entropy elsewhere. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. Bennett later acknowledged the validity of Earman and Norton's argument, while maintaining that Landauer's principle explains the mechanism by which real systems do not violate the second law of thermodynamics. == Recent progress == Although the argument by Landauer and Bennett only answers the consistency between the second law of thermodynamics and the whole cyclic process of the entire system of a Szilard engine (a composite system of the engine and the demon), a recent approach based on the non-equilibrium thermodynamics for small fluctuating systems has provided deeper insight on each information process with each subsystem. Although Bennett had reached the same conclusion as Szilard's 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he had reached it for different reasons. ","A thought experiment in which a demon guards a microscopic trapdoor in a wall separating two parts of a container filled with different gases at equal temperatures. The demon selectively allows molecules to pass from one side to the other, causing an increase in temperature in one part and a decrease in temperature in the other, contrary to the second law of thermodynamics.","A thought experiment in which a demon guards a macroscopic trapdoor in a wall separating two parts of a container filled with different gases at different temperatures. The demon selectively allows molecules to pass from one side to the other, causing a decrease in temperature in one part and an increase in temperature in the other, in accordance with the second law of thermodynamics.","A thought experiment in which a demon guards a microscopic trapdoor in a wall separating two parts of a container filled with the same gas at equal temperatures. The demon selectively allows faster-than-average molecules to pass from one side to the other, causing a decrease in temperature in one part and an increase in temperature in the other, contrary to the second law of thermodynamics.","A thought experiment in which a demon guards a macroscopic trapdoor in a wall separating two parts of a container filled with the same gas at equal temperatures. The demon selectively allows faster-than-average molecules to pass from one side to the other, causing an increase in temperature in one part and a decrease in temperature in the other, contrary to the second law of thermodynamics.","A thought experiment in which a demon guards a microscopic trapdoor in a wall separating two parts of a container filled with the same gas at different temperatures. 
The demon selectively allows slower-than-average molecules to pass from one side to the other, causing a decrease in temperature in one part and an increase in temperature in the other, in accordance with the second law of thermodynamics.",C,kaggle200,"Maxwell's demon is a thought experiment that would hypothetically violate the second law of thermodynamics. It was proposed by the physicist James Clerk Maxwell in 1867. In his first letter Maxwell called the demon a ""finite being"", while the ""Daemon"" name was first used by Lord Kelvin.
In the thought experiment, a demon controls a small massless door between two chambers of gas. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool down. This would decrease the total entropy of the two gases, without applying any work, thereby violating the second law of thermodynamics.
In other words, Maxwell imagines one container divided into two parts, ""A"" and ""B"". Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from ""A"" flies towards the trapdoor, the demon opens it, and the molecule will fly from ""A"" to ""B"". Likewise, when a slower-than-average molecule from ""B"" flies towards the trapdoor, the demon will let it pass from ""B"" to ""A"". The average speed of the molecules in ""B"" will have increased while in ""A"" they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in ""A"" and increases in ""B"", contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs ""A"" and ""B"" could extract useful work from this temperature difference.
James Clerk Maxwell imagined one container divided into two parts, ""A"" and ""B"". Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from ""A"" flies towards the trapdoor, the demon opens it, and the molecule will fly from ""A"" to ""B"". The average speed of the molecules in ""B"" will have increased while in ""A"" they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in ""A"" and increases in ""B"", contrary to the second law of thermodynamics.","Maxwell's demon can distinguish between fast and slow moving molecules. If this demon only let fast moving molecules through a trapdoor to a container, the temperature inside the container would increase without any work being applied. Such a scenario violates the second law of thermodynamics. Leo Szilard's refinement of Maxwell's demon in the context of information theory is sometimes referred to as Szilard's demon. The biological equivalent of Maxwell's ""finite being"" is a Molecular demon.
The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature, when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature. The second law is also expressed as the assertion that in an isolated system, entropy never decreases.Maxwell conceived a thought experiment as a way of furthering the understanding of the second law. His description of the experiment is as follows: ... if we conceive of a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are as essentially finite as our own, would be able to do what is impossible to us. For we have seen that molecules in a vessel full of air at uniform temperature are moving with velocities by no means uniform, though the mean velocity of any great number of them, arbitrarily selected, is almost exactly uniform. Now let us suppose that such a vessel is divided into two portions, A and B, by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B, and only the slower molecules to pass from B to A. He will thus, without expenditure of work, raise the temperature of B and lower that of A, in contradiction to the second law of thermodynamics.In other words, Maxwell imagines one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference.
Maxwell's demon James Clerk Maxwell imagined one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other, separated by a wall. Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wall. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics.One response to this question was suggested in 1929 by Leó Szilárd and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Likewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed.Maxwell's 'demon' repeatedly alters the permeability of the wall between A and B. It is therefore performing thermodynamic operations on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.","- Maxwell's demon is a thought experiment that would hypothetically violate the second law of thermodynamicsSzilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energyLikewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed.Maxwell's 'demon' repeatedly alters the permeability of the wall between A and BIn his first letter Maxwell called the demon a ""finite being"", while the ""Daemon"" name was first used by Lord Kelvin.
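Szilárd's and Brillouin's point, that the demon's measurement carries an unavoidable thermodynamic cost, is usually quantified as roughly k_B*T*ln 2 per bit of information, the same figure that appears in the later Landauer bound. The snippet below is only an order-of-magnitude check at an assumed room temperature; it is not taken from the original passage.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # assumed room temperature, K

# Minimum work associated with one bit of information (Landauer/Szilard figure),
# often invoked when analysing information-powered versions of Maxwell's demon.
w_min = k_B * T * math.log(2)
print(f"k_B*T*ln2 at {T:.0f} K ~ {w_min:.2e} J per bit")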
In the thought experiment, a demon controls a small massless door between two chambers of gasLeo Szilard's refinement of Maxwell's demon in the context of information theory is sometimes referred to as Szilard's demonMaxwell's demon can distinguish between fast and slow moving moleculesThe second law is also expressed as the assertion that in an isolated system, entropy never decreases.Maxwell conceived a thought experiment as a way of furthering the understanding of the second lawA heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference.
Maxwell's demon James Clerk Maxwell imagined one container divided into two parts, A and BIt was proposed by the physicist James Clerk Maxwell in 1867When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to BHis description of the experiment is as follows: ..Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wallWhen a faster-than-average molecule from ""A"" flies towards the trapdoor, the demon opens it, and the molecule will fly from ""A"" to ""B""Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool downLikewise, when a slower-than-average molecule from ""B"" flies towards the trapdoor, the demon ","- Maxwell's demon is a thought experiment that would hypothetically violate the second law of thermodynamicsSzilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energyLikewise, Brillouin demonstrated that the decrease in entropy caused by the demon would be less than the entropy produced by choosing molecules based on their speed.Maxwell's 'demon' repeatedly alters the permeability of the wall between A and BIn his first letter Maxwell called the demon a ""finite being"", while the ""Daemon"" name was first used by Lord Kelvin.
In the thought experiment, a demon controls a small massless door between two chambers of gasLeo Szilard's refinement of Maxwell's demon in the context of information theory is sometimes referred to as Szilard's demonMaxwell's demon can distinguish between fast and slow moving moleculesThe second law is also expressed as the assertion that in an isolated system, entropy never decreases.Maxwell conceived a thought experiment as a way of furthering the understanding of the second lawA heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference.
Maxwell's demon James Clerk Maxwell imagined one container divided into two parts, A and BIt was proposed by the physicist James Clerk Maxwell in 1867When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to BHis description of the experiment is as follows: ..Observing the molecules on both sides, an imaginary demon guards a microscopic trapdoor in the wallWhen a faster-than-average molecule from ""A"" flies towards the trapdoor, the demon opens it, and the molecule will fly from ""A"" to ""B""Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool downLikewise, when a slower-than-average molecule from ""B"" flies towards the trapdoor, the demon [SEP]What is the Maxwell's Demon thought experiment?","['C', 'A', 'D']",1.0
What is the application of Memristor?,"One advantage of memristive networks is that they can be implemented using relatively simple and inexpensive hardware, making them an attractive option for developing low-cost artificial intelligence systems. A memristor (; a portmanteau of memory resistor) is a non-linear two-terminal electrical component relating electric charge and magnetic flux linkage. However, the field of memristive networks is still in the early stages of development, and more research is needed to fully understand their capabilities and limitations. Memristor have applications in programmable logicSnider, Gregory Stuart (2004) ""Architecture and methods for computing with reconfigurable resistor crossbars"" signal processing,Mouttet, Blaise Laurent (2006) ""Programmable crossbar signal processor"" Super-resolution imaging physical neural networks,Snider, Greg (2003) ""Molecular-junction-nanowire-crossbar-based neural network"" control systems,Mouttet, Blaise Laurent (2007) ""Crossbar control circuit"" reconfigurable computing,Pino, Robinson E. (2010) ""Reconfigurable electronic circuit"" in-memory computing, brain–computer interfacesMouttet, Blaise Laurent (2009) ""Memristor crossbar neural interface"" and RFID.Kang, Hee Bok (2009) ""RFID device with memory unit having memristor characteristics"" Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation Several early works have been reported in this direction. For a mathematical description of a memristive device (systems), see Theory. In a memristive network, the memristive devices are used to simulate the behavior of neurons and synapses in the human brain. A memristive network is a type of artificial neural network that is based on memristive devices, which are electronic components that exhibit the property of memristance. Several such memristor system technologies have been developed, notably ReRAM. Such a system comprises a circuit, of multiple conventional components, which mimics key properties of the ideal memristor component and is also commonly referred to as a memristor. The identification of memristive properties in electronic devices has attracted controversy. This has raised the suggestion that such devices should be recognised as memristors. Chua proposed the use of memristive devices as a means of building artificial neural networks that could simulate the behavior of the human brain. In 2012, Meuffels and Soni discussed some fundamental issues and problems in the realization of memristors. A simple test has been proposed by Pershin and Di Ventra to analyze whether such an ideal or generic memristor does actually exist or is a purely mathematical concept. This was an early use of the word ""memristor"" in the context of a circuit device. Such a device would act as a memristor under all conditions, but would be less practical. ===Memristive systems=== In the more general concept of an n-th order memristive system the defining equations are :\begin{align} y(t) &= g(\textbf{x},u,t)u(t), \\\ \dot{\textbf{x}} &= f(\textbf{x},u,t) \end{align} where u(t) is an input signal, y(t) is an output signal, the vector x represents a set of n state variables describing the device, and g and f are continuous functions. These devices are intended for applications in nanoelectronic memory devices, computer logic, and neuromorphic/neuromemristive computer architectures. Since then, several memristive sensors have been demonstrated. 
===Spin memristive systems=== ====Spintronic memristor==== Chen and Wang, researchers at disk-drive manufacturer Seagate Technology described three examples of possible magnetic memristors. The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. In 2011, they showed how memristor crossbars can be combined with fuzzy logic to create an analog memristive neuro-fuzzy computing system with fuzzy input and output terminals. ","Memristor has applications in the production of electric cars, airplanes, and ships.","Memristor has applications in the production of food, clothing, and shelter.","Memristor has applications in the production of solar panels, wind turbines, and hydroelectric power plants.","Memristor has applications in programmable logic signal processing, Super-resolution imaging, physical neural networks, control systems, reconfigurable computing, in-memory computing, brain–computer interfaces and RFID.","Memristor has applications in optical fiber communication, satellite communication, and wireless communication.",D,kaggle200,"In 2008, a team at HP Labs found experimental evidence for the Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor concept. According to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history—the so-called ""non-volatility property"". When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.
Dr. Paul Penfield, in a 1974 MIT technical report, mentions the memristor in connection with Josephson junctions. This was an early use of the word ""memristor"" in the context of a circuit device.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.
Memristors have applications in programmable logic, signal processing, super-resolution imaging, physical neural networks, control systems, reconfigurable computing, in-memory computing, brain–computer interfaces and RFID. Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation. Several early works have been reported in this direction.
Titanium dioxide memristor Interest in the memristor revived when an experimental solid-state version was reported by R. Stanley Williams of Hewlett Packard in 2007. The article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films. The device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current.
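The statement that the device "achieves a resistance dependent on the history of current" can be made concrete with the linear ion-drift model widely used as a textbook description of TiO2 memristors; it is one instance of the general memristive-system form y(t) = g(x,u,t)u(t), dx/dt = f(x,u,t) quoted earlier in this row. The Python sketch below integrates that model under a sinusoidal voltage; every parameter value is an illustrative assumption, not data from the article.

import math

# Linear ion-drift memristor model (illustrative parameter values only).
R_on, R_off = 100.0, 16e3      # ohm
D = 10e-9                      # device thickness, m
mu_v = 1e-14                   # assumed dopant mobility, m^2 s^-1 V^-1
w = 0.1 * D                    # initial doped-region width (the state variable)

dt = 1e-6
for step in range(200000):
    t = step * dt
    v = 1.0 * math.sin(2 * math.pi * 10 * t)     # 10 Hz, 1 V sinusoidal drive
    M = R_on * (w / D) + R_off * (1 - w / D)     # instantaneous memristance
    i = v / M
    w += mu_v * (R_on / D) * i * dt              # state update: dw/dt proportional to current
    w = min(max(w, 0.0), D)                      # keep the state inside the device
    if step % 50000 == 0:
        print(f"t={t:.3f} s  v={v:+.2f} V  M={M:.1f} ohm")

Because the memristance M depends on the accumulated charge through w, the printed resistance drifts over the drive cycle, which is the history dependence described in the surrounding text.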
The identification of memristive properties in electronic devices has attracted controversy. Experimentally, the ideal memristor has yet to be demonstrated.","This was an early use of the word ""memristor"" in the context of a circuit device.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.
Memristor have applications in programmable logic signal processing, Super-resolution imaging physical neural networks, control systems, reconfigurable computing, in-memory computing, brain–computer interfaces and RFIDExperimentally, the ideal memristor has yet to be demonstratedThe article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films- In 2008, a team at HP Labs found experimental evidence for the Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor conceptThe device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current.
The identification of memristive properties in electronic devices has attracted controversyPaul Penfield, in a 1974 MIT technical report mentions the memristor in connection with Josephson junctionsAccording to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history—the so-called ""non-volatility property""Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation Several early works have been reported in this direction.When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.
DrIn 2012, Erokhin and co-authors have demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting b","This was an early use of the word ""memristor"" in the context of a circuit device.
In July 2008, Erokhin and Fontana claimed to have developed a polymeric memristor before the more recently announced titanium dioxide memristor.
Memristor have applications in programmable logic signal processing, Super-resolution imaging physical neural networks, control systems, reconfigurable computing, in-memory computing, brain–computer interfaces and RFIDExperimentally, the ideal memristor has yet to be demonstratedThe article was the first to demonstrate that a solid-state device could have the characteristics of a memristor based on the behavior of nanoscale thin films- In 2008, a team at HP Labs found experimental evidence for the Chua's memristor based on an analysis of a thin film of titanium dioxide, thus connecting the operation of ReRAM devices to the memristor conceptThe device neither uses magnetic flux as the theoretical memristor suggested, nor stores charge as a capacitor does, but instead achieves a resistance dependent on the history of current.
The identification of memristive properties in electronic devices has attracted controversyPaul Penfield, in a 1974 MIT technical report mentions the memristor in connection with Josephson junctionsAccording to HP Labs, the memristor would operate in the following way: the memristor's electrical resistance is not constant but depends on the current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has previously flowed through it and in what direction; the device remembers its history—the so-called ""non-volatility property""Memristive devices are potentially used for stateful logic implication, allowing a replacement for CMOS-based logic computation Several early works have been reported in this direction.When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again.
DrIn 2012, Erokhin and co-authors have demonstrated a stochastic three-dimensional matrix with capabilities for learning and adapting b[SEP]What is the application of Memristor?","['D', 'E', 'C']",1.0
What is the effect generated by a spinning superconductor?,"This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. It is characterized by the Meissner effect, the complete ejection of magnetic field lines from the interior of the superconductor during its transitions into the superconducting state. In the quantum vortex, supercurrent circulates around the normal (i.e. non-superconducting) core of the vortex. The results were strongly supported by Monte Carlo computer simulations. === Meissner effect === When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II. === London moment === Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The situation is different in a superconductor. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. A superconductor with little or no magnetic field within it is said to be in the Meissner state. thumb|Video of the Meissner effect in a high-temperature superconductor (black pellet) with a NdFeB magnet (metallic) thumb|A high-temperature superconductor levitating above a magnet Superconductivity is a set of physical properties observed in certain materials where electrical resistance vanishes and magnetic flux fields are expelled from the material. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface. Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. The superconductivity effect came about as a result of electrons twisted into a vortex between the graphene layers, called ""skyrmions"". The Meissner effect is a defining characteristic of superconductivity. In superconductivity, a Josephson vortex (after Brian Josephson from Cambridge University) is a quantum vortex of supercurrents in a Josephson junction (see Josephson effect). The second hypothesis proposed that electron pairing in high-temperature superconductors is mediated by short- range spin waves known as paramagnons. Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements. 
""High-Temperature Superconductivity Understood at Last"" == External links == * Video about Type I Superconductors: R=0/transition temperatures/ B is a state variable/ Meissner effect/ Energy gap(Giaever)/ BCS model * Lectures on Superconductivity (series of videos, including interviews with leading experts) * YouTube Video Levitating magnet * DoITPoMS Teaching and Learning Package – ""Superconductivity"" Category:Phases of matter Category:Exotic matter Category:Unsolved problems in physics Category:Magnetic levitation Category:Physical phenomena Category:Spintronics Category:Phase transitions Category:Articles containing video clips Category:Science and technology in the Netherlands Category:Dutch inventions Category:1911 in science The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided abla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\, where H is the magnetic field and λ is the London penetration depth. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. ","An electric field, precisely aligned with the spin axis.","A magnetic field, randomly aligned with the spin axis.","A magnetic field, precisely aligned with the spin axis.","A gravitational field, randomly aligned with the spin axis.","A gravitational field, precisely aligned with the spin axis.",C,kaggle200,"A spinning wheel is mounted in a gimbal frame whose axis of rotation (the precession axis) is perpendicular to the spin axis. The assembly is mounted on the vehicle chassis such that, at equilibrium, the spin axis, precession axis and vehicle roll axis are mutually perpendicular.
A London moment gyroscope relies on the quantum-mechanical phenomenon, whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis of the gyroscopic rotor. A magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotation. Gyroscopes of this type can be extremely accurate and stable. For example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (about 1.4×10−7 degrees, or 2.4×10−9 radians) over a one-year period. This is equivalent to an angular separation the width of a human hair viewed from away.
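For a sense of scale, the field such a rotor produces can be estimated from the commonly quoted London moment expression B = 2 m_e ω / e, with m_e the electron mass, e the elementary charge and ω the rotor's angular velocity. The sketch below evaluates it for an assumed spin rate of 80 Hz; the spin rate is an illustrative assumption, not a Gravity Probe B specification.

import math

m_e = 9.1093837015e-31   # electron mass, kg
e = 1.602176634e-19      # elementary charge, C

f_spin = 80.0            # assumed rotor spin rate, Hz (illustrative only)
omega = 2 * math.pi * f_spin

# Commonly quoted magnitude of the London-moment field of a spinning superconductor.
B = 2 * m_e * omega / e
print(f"London moment field ~ {B:.2e} T for a {f_spin:.0f} Hz rotor")

The result is of order a few nanotesla, which is why such readouts rely on extremely sensitive magnetometers.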
The London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.","The London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
The term may also refer to the magnetic moment of any rotation of any superconductor, caused by the electrons lagging behind the rotation of the object, although the field strength is independent of the charge carrier density in the superconductor.
Any Kerr–Newman source has its rotation axis aligned with its magnetic axis. Thus, a Kerr–Newman source is different from commonly observed astronomical bodies, for which there is a substantial angle between the rotation axis and the magnetic moment. Specifically, neither the Sun, nor any of the planets in the Solar System have magnetic fields aligned with the spin axis. Thus, while the Kerr solution describes the gravitational field of the Sun and planets, the magnetic fields arise by a different process.
London moment Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.","This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axesThe London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
The term may also refer to the magnetic moment of any rotation of any superconductor, caused by the electrons lagging behind the rotation of the object, although the field strength is independent of the charge carrier density in the superconductor.
Any Kerr–Newman source has its rotation axis aligned with its magnetic axisThus, while the Kerr solution describes the gravitational field of the Sun and planets, the magnetic fields arise by a different process.
London moment Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axisThe assembly is mounted on the vehicle chassis such that, at equilibrium, the spin axis, precession axis and vehicle roll axis are mutually perpendicular.
A London moment gyroscope relies on the quantum-mechanical phenomenon, whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis of the gyroscopic rotorThis is equivalent to an angular separation the width of a human hair viewed from away.
The London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis- A spinning wheel is mounted in a gimbal frame whose axis of rotation (the precession axis) is perpendicular to the spin axisA magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotationFor example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 degrees, or about ) over a one-year periodThis was critical to the experiment since it is one of the few","This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axesThe London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
The term may also refer to the magnetic moment of any rotation of any superconductor, caused by the electrons lagging behind the rotation of the object, although the field strength is independent of the charge carrier density in the superconductor.
Any Kerr–Newman source has its rotation axis aligned with its magnetic axisThus, while the Kerr solution describes the gravitational field of the Sun and planets, the magnetic fields arise by a different process.
London moment Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axisThe assembly is mounted on the vehicle chassis such that, at equilibrium, the spin axis, precession axis and vehicle roll axis are mutually perpendicular.
A London moment gyroscope relies on the quantum-mechanical phenomenon, whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis of the gyroscopic rotorThis is equivalent to an angular separation the width of a human hair viewed from away.
The London moment (after Fritz London) is a quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis.
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis- A spinning wheel is mounted in a gimbal frame whose axis of rotation (the precession axis) is perpendicular to the spin axisA magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotationFor example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 degrees, or about ) over a one-year periodThis was critical to the experiment since it is one of the few[SEP]What is the effect generated by a spinning superconductor?","['C', 'A', 'D']",1.0
What is the main focus of cryogenic and noble liquid detectors in dark matter experiments?,"The Cryogenic Low-Energy Astrophysics with Noble liquids (CLEAN) experiment by the DEAP/CLEAN collaboration is searching for dark matter using noble gases at the SNOLAB underground facility. SIMPLE (Superheated Instrument for Massive ParticLe Experiments) is an experiment search for direct evidence of dark matter. CLEAN has studied neon and argon in the MicroCLEAN prototype, and running the MiniCLEAN detector to test a multi-ton design. == Design == Dark matter searches in isolated noble gas scintillators with xenon and argon have set limits on WIMP interactions, such as recent cross sections from LUX and XENON. Indirect detection of dark matter is a method of searching for dark matter that focuses on looking for the products of dark matter interactions (particularly Standard Model particles) rather than the dark matter itself. Contrastingly, direct detection of dark matter looks for interactions of dark matter directly with atoms. There are experiments aiming to produce dark matter particles using colliders. There are many instruments that have been used in efforts to detect dark matter annihilation products, including H.E.S.S., VERITAS, and MAGIC (Cherenkov telescopes), Fermi Large Area Telescope (LAT), High Altitude Water Cherenkov Experiment (HAWC), and Antares, IceCube, and SuperKamiokande (neutrino telescopes).Ahnen, Max Ludwig, S. Ansoldi, L. A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale et al. ""Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies."" These two measurements determine the energy deposited in the crystal in each interaction, but also give information about what kind of particle caused the event. GEODM (Germanium Observatory for Dark Matter), with roughly 1500 kg of detector mass, has expressed interest in the SNOLAB ""Cryopit"" location. The Cryogenic Dark Matter Search (CDMS) is a series of experiments designed to directly detect particle dark matter in the form of Weakly Interacting Massive Particles (or WIMPs). The DarkSide collaboration is an international affiliation of universities and labs seeking to directly detect dark matter in the form of weakly interacting massive particles (WIMPs). Detectors like CDMS and similar experiments measure huge numbers of interactions within their detector volume in order to find the extremely rare WIMP events. == Detection technology == The CDMS detectors measure the ionization and phonons produced by every particle interaction in their germanium and silicon crystal substrates. In general, indirect detection searches focus on either gamma-rays, cosmic-rays, or neutrinos. Spin-dependeant cross section limits were set for light WIMPs. 
==References== * The SIMPLE Phase II dark matter search (2014) * Fabrication and response of high concentration SIMPLE superheated droplet detectors with different liquids (2013) * Final Analysis and Results of the Phase II SIMPLE Dark Matter Search (2012) * Reply to Comment on First Results of the Phase II SIMPLE Dark Matter Search (2012) * Comment on First Results of the Phase II SIMPLE Dark Matter Search (2012) * First Results of the Phase II SIMPLE Dark Matter Search (2010) * SIMPLE dark matter search results (2005) ==External links== * SIMPLE experiment website Category:Experiments for dark matter search Searches for the products of dark matter interactions are profitable because there is an extensive amount of dark matter present in the universe, and presumably, a lot of dark matter interactions and products of those interactions (which are the focus of indirect detection searches); and many currently operational telescopes can be used to search for these products. It will have 500 kg of noble cryogen in a spherical steel vessel with 92 PMTs shielded in a water tank with muon rejection. == References == Category:Experiments for dark matter search The detectors are filled with liquid argon from underground sources in order to exclude the radioactive isotope , which makes up one in every 1015 (quadrillion) atoms in atmospheric argon. Darkside-20k (DS-20k) with 20 tonnes of liquid argon is being planned as of 2019. == Darkside-10 == The Darkside-10 prototype detector had 10 kg of liquid argon. Using an array of semiconductor detectors at millikelvin temperatures, CDMS has at times set the most sensitive limits on the interactions of WIMP dark matter with terrestrial materials (as of 2018, CDMS limits are not the most sensitive). Thus, the objects of indirect searches are the secondary products that are expected from the annihilation of two dark matter particles. ",Distinguishing background particles from dark matter particles by detecting the heat produced when a particle hits an atom in a crystal absorber or the scintillation produced by a particle collision in liquid xenon or argon.,Detecting the heat produced when a particle hits an atom in a crystal absorber or the scintillation produced by a particle collision in liquid xenon or argon to determine the mass and interaction cross section with electrons of dark matter particles.,Detecting the heat produced when a particle hits an atom in a crystal absorber or the scintillation produced by a particle collision in liquid xenon or argon to distinguish between different types of background particles.,Detecting the heat produced when a particle hits an atom in a crystal absorber or the scintillation produced by a particle collision in liquid xenon or argon to determine the mass and interaction cross section with nucleons of dark matter particles.,Detecting the mass and interaction cross section with nucleons of dark matter particles by detecting the heat produced when a particle hits an atom in a crystal absorber or the scintillation produced by a particle collision in liquid xenon or argon.,A,kaggle200,"WIMPs fit the model of a relic dark matter particle from the early Universe, when all particles were in a state of thermal equilibrium. For sufficiently high temperatures, such as those existing in the early Universe, the dark matter particle and its antiparticle would have been both forming from and annihilating into lighter particles. 
As the Universe expanded and cooled, the average thermal energy of these lighter particles decreased and eventually became insufficient to form a dark matter particle-antiparticle pair. The annihilation of the dark matter particle-antiparticle pairs, however, would have continued, and the number density of dark matter particles would have begun to decrease exponentially. Eventually, however, the number density would become so low that the dark matter particle and antiparticle interaction would cease, and the number of dark matter particles would remain (roughly) constant as the Universe continued to expand. Particles with a larger interaction cross section would continue to annihilate for a longer period of time, and thus would have a smaller number density when the annihilation interaction ceases. Based on the current estimated abundance of dark matter in the Universe, if the dark matter particle is such a relic particle, the interaction cross section governing the particle-antiparticle annihilation can be no larger than the cross section for the weak interaction. If this model is correct, the dark matter particle would have the properties of the WIMP.
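The freeze-out picture sketched in this paragraph is conventionally compressed into a single dimensionless Boltzmann equation, dY/dx = -(λ/x^2)(Y^2 - Y_eq^2) with Y_eq proportional to x^1.5 e^(-x), where x = m/T and a larger λ corresponds to a larger annihilation cross section. The toy integration below only illustrates the qualitative claim made here (a larger cross section keeps annihilation going longer and leaves a smaller relic density); the normalisations are arbitrary and this is not a calibrated relic-abundance calculation.

import math

def relic_toy(lam, x_start=1.0, x_end=100.0, steps=100000):
    """Toy freeze-out of dY/dx = -(lam/x^2)(Y^2 - Yeq^2), Yeq = x^1.5 * exp(-x).

    A semi-implicit update keeps the integration stable for large lam.
    """
    dx = (x_end - x_start) / steps
    x = x_start
    y = x ** 1.5 * math.exp(-x)          # start on the equilibrium curve
    for _ in range(steps):
        yeq = x ** 1.5 * math.exp(-x)
        k = lam / (x * x) * dx
        y = (y + k * yeq * yeq) / (1.0 + k * y)   # semi-implicit Euler step
        x += dx
    return y

for lam in (1e4, 1e6, 1e8):
    print(f"lambda = {lam:.0e}   frozen-out Y(x=100) ~ {relic_toy(lam):.2e}")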
The most recently completed version of the XENON experiment is XENON1T, which used 3.2 tons of liquid xenon. This experiment produced a then record limit for the cross section of WIMP dark matter of 4.1×10−47 cm2 at a mass of 30 GeV/c2. The most recent iteration of the XENON succession is XENONnT, which is currently running with 8 tonnes of liquid xenon. This experiment is projected to be able to probe WIMP-nucleon cross sections of 1.4×10−48 cm2 for a 50 GeV/c2 WIMP mass. At this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic.
Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
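The annual modulation invoked by DAMA/LIBRA follows from the kinematics mentioned here: the component of the Earth's orbital velocity along the Sun's motion through the halo adds in summer and subtracts in winter. The sketch below evaluates the standard leading-order cosine form with rounded, assumed numbers (a 230 km/s solar speed, an effective modulation amplitude of roughly 15 km/s, and a peak near the start of June); it is an illustration, not the collaboration's analysis.

import math

V_SUN = 230.0    # assumed speed of the Sun through the Galactic halo, km/s (rounded)
V_MOD = 15.0     # assumed modulation amplitude: the part of Earth's ~30 km/s orbital
                 # speed lying along the Sun's motion (roughly half, given the tilt
                 # of the orbital plane), km/s
DAY_PEAK = 152   # assumed day of year of the maximum (about the start of June)

def detector_halo_speed(day_of_year):
    """Leading-order annually modulated speed of the detector through the halo, km/s."""
    return V_SUN + V_MOD * math.cos(2 * math.pi * (day_of_year - DAY_PEAK) / 365.25)

for day in (152, 244, 335):
    print(f"day {day:3d}: ~{detector_halo_speed(day):.0f} km/s")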
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO.","Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Noble gas scintillators: Noble gas scintillators use the property of certain materials to scintillate, which is when a material absorbs energy from a particle and re-emits the same amount of energy as light. Of particular interest for dark matter detection is the use of noble gases, even more specifically liquid xenon. The XENON series of experiments, also located at the Gran Sasso National Lab, is a forefront user of liquid xenon scintillators. Common across all generations of the experiment, the detector consists of a tank of liquid xenon with a gaseous layer on top. At the top and bottom of the detector is a layer of photomultiplier tubes (PMTs). When a dark matter particle collides with the liquid xenon, it rapidly releases a photon which is detected by the PMTs. To cross-reference this data point, an electric field is applied which is sufficiently large to prevent complete recombination of the electrons knocked loose by the interaction. These drift to the top of the detector and are also detected, creating two separate detections for each event. Measuring the time delay between these allows for a complete 3-D reconstruction of the interaction. The detector is also able to discriminate between electronic recoils and nuclear recoils, as both types of events would produce differing ratios of the photon energy and the released electron energy. The most recently completed version of the XENON experiment is XENON1T, which used 3.2 tons of liquid xenon. This experiment produced a then record limit for the cross section of WIMP dark matter of 4.1×10−47 cm2 at a mass of 30 GeV/c2. The most recent iteration of the XENON succession is XENONnT, which is currently running with 8 tonnes of liquid xenon. This experiment is projected to be able to probe WIMP-nucleon cross sections of 1.4×10−48 cm2 for a 50 GeV/c2 WIMP mass. At this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic.
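For the depth coordinate, the "3-D reconstruction" mentioned above amounts to multiplying the delay between the prompt light signal and the later electron signal by the electron drift speed in the liquid. The sketch below shows that arithmetic with an assumed drift speed of 1.5 mm per microsecond, a typical order of magnitude for a liquid-xenon time projection chamber rather than a value from this text; the transverse position, which comes from the PMT hit pattern, is not modelled here.

DRIFT_SPEED_MM_PER_US = 1.5   # assumed electron drift speed in liquid xenon, mm/us

def depth_from_drift_time(delay_us):
    """Depth of the interaction below the liquid surface, in mm, from the signal delay."""
    return DRIFT_SPEED_MM_PER_US * delay_us

for delay in (10.0, 100.0, 500.0):
    print(f"delay {delay:6.1f} us  ->  depth ~ {depth_from_drift_time(delay):7.1f} mm")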
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECA. Noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off nuclei). Other experiments include SIMPLE and PICASSO.","Of particular interest for dark matter detection is the use of noble gases, even more specifically liquid xenonDirectional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Noble gas scintillators Noble gas scintillators use the property of certain materials to scintillate, which is when a material absorbs energy from a particle and remits the same amount of energy as lightNoble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argonNoble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experimentCryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECAThis claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
These experiments mostly use either cryogenic or noble liquid detector technologiesAt this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic.
These experiments mostly use either cryogenic or noble liquid detector technologiesWhen a dark matter particle collides with the liquid xenon, it rapidly releases a photon which is detected by the PMTsCurrently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particlesCommon across all generations of the experiment, the detector consists of a tank of liquid xenon with a gaseous layer on topThe DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matterCryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germaniumBoth of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off n","Of particular interest for dark matter detection is the use of noble gases, even more specifically liquid xenonDirectional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Noble gas scintillators Noble gas scintillators use the property of certain materials to scintillate, which is when a material absorbs energy from a particle and remits the same amount of energy as lightNoble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argonNoble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experimentCryogenic detector experiments include: CDMS, CRESST, EDELWEISS, EURECAThis claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
These experiments mostly use either cryogenic or noble liquid detector technologiesAt this ultra-low cross section, interference from the background neutrino flux is predicted to be problematic.
These experiments mostly use either cryogenic or noble liquid detector technologiesWhen a dark matter particle collides with the liquid xenon, it rapidly releases a photon which is detected by the PMTsCurrently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particlesCommon across all generations of the experiment, the detector consists of a tank of liquid xenon with a gaseous layer on topThe DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matterCryogenic detectors operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germaniumBoth of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (that scatter off n[SEP]What is the main focus of cryogenic and noble liquid detectors in dark matter experiments?","['A', 'D', 'E']",1.0
What is a pycnometer?,"A gas pycnometer is a laboratory device used for measuring the density—or, more accurately, the volume—of solids, be they regularly shaped, porous or non-porous, monolithic, powdered, granular or in some way comminuted, employing some method of gas displacement and the volume:pressure relationship known as Boyle's Law. The simplest type of gas pycnometer (due to its relative lack of moving parts) consists of two chambers, one (with a removable gas-tight lid) to hold the sample and a second chamber of fixed, known (via calibration) internal volume – referred to as the reference volume or added volume. A gas pycnometer is also sometimes referred to as a helium pycnometer. ==Types of gas pycnometer== ===Gas expansion pycnometer=== Gas expansion pycnometer is also known as constant volume gas pycnometer. The volume measured in a gas pycnometer is that amount of three-dimensional space which is inaccessible to the gas used, i.e. that volume within the sample chamber from which the gas is excluded. Pyknometer is to be found in older texts, and is used interchangeably with pycnometer in British English. In practice the sample may occupy either chamber, that is gas pycnometers can be constructed such that the sample chamber is pressurized first, or such that it is the reference chamber that starts at the higher pressure. Derivation of the ""working equation"" and a schematic illustration of such a gas expansion pycnometer is given by Lowell et al..S. Lowell, J.E. Shields, M.A. Thomas and M. Thommes ""Characterization of Porous Solids and Powders: Surface Area, Pore Size and Density"", Springer (originally by Kluwer Academic Publishers), 2004 p. 327 ===Variable volume pycnometer=== Variable volume pycnometer (or gas comparison pycnometer) consists of either a single or two variable volume chambers. A pyranometer () is a type of actinometer used for measuring solar irradiance on a planar surface and it is designed to measure the solar radiation flux density (W/m2) from the hemisphere above within a wavelength range 0.3 μm to 3 μm. Various design parameters have been analyzed by Tamari.S. Tamari (2004) Meas. Sci. Technol. 15 549–558 ""Optimum design of the constant- volume gas pycnometer for determining the volume of solid particles"" The working equation of a gas pycnometer wherein the sample chamber is pressurized first is as follows: ::V_{s} = V_{c} + \frac{ V_{r}} {1-\frac{P_{1}}{P_{2}}} where Vs is the sample volume, Vc is the volume of the empty sample chamber (known from a prior calibration step), Vr is the volume of the reference volume (again known from a prior calibration step), P1 is the first pressure (i.e. in the sample chamber only) and P2 is the second (lower) pressure after expansion of the gas into the combined volumes of sample chamber and reference chamber. *An extreme example of the gas displacement principle for volume measurement is described in (Lindberg, 1993) wherein a chamber large enough to hold a flatbed truck is used to measure the volume of a load of timber. ==See also== *Pycnometer ==References== ==External links== *ASTM International, formerly known as the American Society for Testing and Materials. This type of pycnometer is commercially obsolete; in 2006 ASTM withdrew its standard test method D2856ASTM D2856-94(1998) Standard Test Method for Open-Cell Content of Rigid Cellular Plastics by the Air Pycnometer (withdrawn in 2006). 
for the open-cell content of rigid cellular plastics by the air pycnometer, which relied upon the use of a variable volume pycnometer, and was replaced by test method D6226ASTM D6226-05 Standard Test Method for Open Cell Content of Rigid Cellular Plastics. which describes a gas expansion pycnometer. ==Practical use== ===Volume vs density=== While pycnometers (of any type) are recognized as density measuring devices they are in fact devices for measuring volume only. A lysimeter (from Greek λύσις (loosening) and the suffix -meter) is a measuring device which can be used to measure the amount of actual evapotranspiration which is released by plants (usually crops or trees). A pneumonic device is any equipment designed for use with or relating to the diaphragm. * For non- porous solids a pycnometer can be used to measure particle density. The device additionally comprises a valve to admit a gas under pressure to one of the chambers, a pressure measuring device – usually a transducer – connected to the first chamber, a valved pathway connecting the two chambers, and a valved vent from the second of the chambers. An Abney level and clinometer is an instrument used in surveying which consists of a fixed sighting tube, a movable spirit level that is connected to a pointing arm, and a protractor scale. The spectrum is influenced also by aerosol and pollution. === Thermopile pyranometers === A thermopile pyranometer (also called thermo-electric pyranometer) is a sensor based on thermopiles designed to measure the broad band of the solar radiation flux density from a 180° field of view angle. It can be used as a hand-held instrument or mounted on a Jacob's staff for more precise measurement, and it is small enough to carry in a coat pocket.Smaller Instruments and Appliances: The Abney Level and Clinometer, A Manual of the Principal Instruments used in American Engineering and Surveying, W. & L. E. Gurley, Troy, NY, 1891; page 219.George William Usill, Clinometers: The Abney Level, Practical Surveying, Crosby Lockwood and Son, London, 1889; page 33. Each pyranometer has a unique sensitivity, unless otherwise equipped with electronics for signal calibration. ==== Usage ==== thumb|left|Thermopile pyranometer as part of a meteorological station Thermopile pyranometers are frequently used in meteorology, climatology, climate change research, building engineering physics, photovoltaic systems, and monitoring of photovoltaic power stations. Adsorption of the measuring gas should be avoided, as should excessive vapor pressure from moisture or other liquids present in the otherwise solid sample. 
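The gas-expansion working equation quoted earlier in this passage, Vs = Vc + Vr/(1 - P1/P2), can be applied directly. The function below is a minimal sketch of that calculation; the chamber volumes and pressures in the example call are invented illustrative numbers, not data from any instrument.

def sample_volume(v_cell, v_ref, p1, p2):
    """Gas-expansion pycnometer working equation: Vs = Vc + Vr / (1 - P1/P2).

    v_cell: calibrated empty sample-chamber volume
    v_ref:  calibrated reference (added) volume
    p1:     pressure with the sample chamber pressurised and isolated
    p2:     lower pressure after expansion into the reference volume
    """
    return v_cell + v_ref / (1.0 - p1 / p2)

# Invented illustrative numbers (cm^3 and kPa), corresponding to roughly a 4 cm^3 sample:
print(f"sample volume ~ {sample_volume(10.0, 8.0, 200.0, 85.7):.2f} cm^3")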
===Applications=== Gas pycnometers are used extensively for characterizing a wide variety of solids such as heterogeneous catalysts, carbons (DIN 51913, Testing of carbon materials – Determination of density by gas pycnometer (volumetric) using helium as the measuring gas), metal powders (ASTM B923-02(2008), Standard Test Method for Metal Powder Skeletal Density by Helium or Nitrogen Pycnometry; MPIF Standard 63, Method for Determination of MIM Components (Gas Pycnometer)), soils (ASTM D5550-06, Standard Test Method for Specific Gravity of Soil Solids by Gas Pycnometer), ceramics (ASTM C604, Standard Test Method for True Specific Gravity of Refractory Materials by Gas-Comparison Pycnometer), active pharmaceutical ingredients (APIs) and excipients (USP <699>, ""Density of Solids""), petroleum coke (ASTM D2638-06, Standard Test Method for Real Density of Calcined Petroleum Coke by Helium Pycnometer), cement and other construction materials (C. Hall, ""Water Transport in Brick, Stone and Concrete"", Taylor & Francis, 2002, p. 13), cenospheres/glass microballoons and solid foams. ==Notes== *Pycnometer is the preferred spelling in modern American English usage. ",A device used to measure the density of a gas.,A device used to measure the mass of a liquid.,A device used to measure the volume of a gas.,A device used to determine the density of a liquid.,A device used to determine the volume of a liquid.,D,kaggle200,"The Fahrenheit hydrometer is a device used to measure the density of a liquid. It was invented by Daniel Gabriel Fahrenheit (1686–1736), better known for his work in thermometry.
A device used to measure humidity of air is called a psychrometer or hygrometer. A humidistat is a humidity-triggered switch, often used to control a dehumidifier.
A spirometer is a device used to measure timed expired and inspired volumes, and can be used to help diagnose asthma.
A pycnometer (from Greek: πυκνός, meaning ""dense""), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance.
The density calculated from a volume measured using a gas pycnometer is often referred to as skeletal density, true density or helium density.
For non-porous solids a pycnometer can be used to measure particle density.
An extreme example of the gas displacement principle for volume measurement is described in U.S. Patent 5,231,873 (Lindberg, 1993) wherein a chamber large enough to hold a flatbed truck is used to measure the volume of a load of timber.
An evaporator is a device used to turn a liquid into a gas.
Pycnometer A pycnometer (from Ancient Greek: πυκνός, romanized: puknos, lit. 'dense'), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance.If the flask is weighed empty, full of water, and full of a liquid whose relative density is desired, the relative density of the liquid can easily be calculated. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The weight of the displaced liquid can then be determined, and hence the relative density of the powder.","Pycnometer is the preferred spelling in modern American English usageA humidistat is a humidity-triggered switch, often used to control a dehumidifier.
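The weighing procedure described here reduces to two short formulas: the relative density of a liquid follows from the empty, water-filled and liquid-filled weighings, and the particle density of an insoluble powder follows from the mass of liquid it displaces. The Python sketch below implements that arithmetic; the function names and example masses are invented for illustration and are not part of the original text.

def liquid_relative_density(m_empty, m_water, m_liquid):
    """Relative density (specific gravity) of a liquid from three pycnometer weighings."""
    return (m_liquid - m_empty) / (m_water - m_empty)

def powder_particle_density(m_powder, m_full_liquid, m_powder_plus_liquid, rho_liquid):
    """Particle density of an insoluble powder measured with a pycnometer.

    m_full_liquid:        pycnometer filled with the working liquid only
    m_powder_plus_liquid: pycnometer containing the powder and topped up with liquid
    The mass of liquid displaced by the powder is
    m_powder + m_full_liquid - m_powder_plus_liquid.
    """
    displaced = m_powder + m_full_liquid - m_powder_plus_liquid
    return m_powder * rho_liquid / displaced

# Invented example masses in grams; water density taken as 0.998 g/cm^3 near 20 C.
print(f"relative density ~ {liquid_relative_density(25.0, 75.0, 64.0):.3f}")
print(f"particle density ~ {powder_particle_density(10.0, 75.0, 81.0, 0.998):.2f} g/cm^3")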
A spirometer is a device used to measure timed expired and inspired volumes, and can be used to help diagnose asthma.
A pycnometer (from Greek: πυκνός () meaning ""dense""), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. Pyknometer is to be found in older texts, and is used interchangeably with pycnometer in British English. An extreme example of the gas displacement principle for volume measurement is described in U.S. Patent 5,231,873 (Lindberg, 1993), wherein a chamber large enough to hold a flatbed truck is used to measure the volume of a load of timber.
An evaporator is a device used to turn a liquid into a gas.
Pycnometer A pycnometer (from Ancient Greek: πυκνός, romanized: puknos, lit. 'dense'), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The term has its origins in the Greek word πυκνός, meaning ""dense"".
The density calculated from a volume measured using a gas pycnometer is often referred to as skeletal density, true density or helium density.
For non-porous solids a pycnometer can be used to measure particle density.
An extreme example of the gas displacement principle for volume measurement is described in U.S. Patent 5,231,873. The Fahrenheit hydrometer is a device used to measure the density of a liquid. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance. It was invented by Daniel Gabriel Fahrenheit (1686–1736).","
[SEP]What is a pycnometer?","['D', 'E', 'A']",1.0
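Illustrative example: the pycnometer passages above describe weighing the flask empty, full of water, and full of the sample liquid to obtain the liquid's relative density. Below is a minimal Python sketch of that calculation; the function name and the example weighings are assumptions chosen for illustration, not values from the row above.

```python
# Relative density (specific gravity) of a liquid from three pycnometer weighings,
# as described in the text: weigh the flask empty, filled with water, and filled
# with the sample liquid. The numeric values below are made-up illustrations.

def relative_density(m_empty_g: float, m_water_g: float, m_liquid_g: float) -> float:
    """Ratio of the liquid's density to water's density."""
    # Both filled masses contain the same internal volume of fluid, so the flask
    # mass and the (unknown) volume cancel in the ratio.
    return (m_liquid_g - m_empty_g) / (m_water_g - m_empty_g)

if __name__ == "__main__":
    # Hypothetical weighings in grams
    print(round(relative_density(m_empty_g=25.0, m_water_g=75.0, m_liquid_g=64.5), 3))  # ~0.79
```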
"What is the estimated redshift of CEERS-93316, a candidate high-redshift galaxy observed by the James Webb Space Telescope?","Spectroscopic observations by JWST's NIRSpec instrument in October 2022 confirmed the galaxy's redshift of z = 13.2 to a high accuracy, establishing it as the oldest and most distant spectroscopically-confirmed galaxy known , with a light-travel distance (lookback time) of 13.6 billion years. CEERS-93316 is a high-redshift galaxy with a spectroscopic redshift z=4.9. F200DB-045 is a candidate high-redshift galaxy, with an estimated redshift of approximately z = 20.4, corresponding to 168 million years after the Big Bang. Notably, the redshift that was initially reported was photometric (z = 16.4), and would have made CEERS-93316 the earliest and most distant known galaxy observed. __NOTOC__ MACS0647-JD is a galaxy with a redshift of about z = 10.7, equivalent to a light travel distance of 13.26 billion light-years (4 billion parsecs). Nonetheless, the redshift value of the galaxy presented by the procedure in one study may differ from the values presented in other studies using different procedures. ==Discovery== The candidate high-redshift galaxy F200DB-045 was discovered within the data from the Early Release Observations (ERO) that was obtained using the Near Infrared Camera of the James Webb Space Telescope (JWST) in July 2022. (H0=67.4 and OmegaM=0.315 (see Table/Planck2018 at ""Lambda-CDM model#Parameters"" ) ==Discovery== The candidate high-redshift galaxy CEERS-93316 (RA:14:19:39.48 DEC:+52:56:34.92), in the Boötes constellation, was discovered by the CEERS imaging observing program using the Near Infrared Camera of the James Webb Space Telescope (JWST) in July 2022. It was reported with a redshift of z~10 using Hubble and Spitzer Space Telescope photometric data, with later reports in 2012 suggesting a possibly higher redshift of z = 11.9 Although doubts were raised that this galaxy could instead be a low- redshift interloper with extreme spectral emission lines producing the appearance of a very high redshift source, later spectroscopic observations by the James Webb Space Telescope's NIRSpec instrument in 2022 confirmed the galaxy's high redshift to a spectroscopically confirmed estimate of z = 11.58. == Gallery == File:Hudf09z10nl.png|UDFj-39546284 File:UDFj-39546284.tif|UDFj-39546284 appears as a faint red blob == See also == * EGSY8p7 * Hubble Ultra-Deep Field * List of the most distant astronomical objects * MACS0647-JD * Reionization * UDFy-38135539 == References == == External links == * UDFj-39546284 on WikiSky 20110127 Category:Fornax Category:Dwarf galaxies Category:Hubble Space Telescope Category:Hubble Ultra- Deep Field CEERS-93316 has a light-travel distance (lookback time) of 12.6 billion years, and, due to the expansion of the universe, a present proper distance of 25.7 billion light-years. MACS0647-JD was announced in November 2012, but by the next month UDFj-39546284, which was previously thought to be z = 10.3, was said to be at z = 11.9,Universe Today - Hubble Census Unveils Galaxies Shining Near Cosmic Dawn although more recent analyses have suggested the latter is likely to be at a lower redshift. This data included a nearby galaxy cluster SMACS J0723.3–7327, a massive cluster known as a possible ""cosmic telescope"" in amplifying background galaxies, including the F200DB-045 background galaxy. 
==Distance== Only a photometric redshift has been determined for F200DB-045; follow-up spectroscopic measurements will be required to confirm the redshift (see spectroscopic redshift). Additional spectroscopic observations by JWST will be needed to accurately confirm the redshift of MACS0647-JD. == See also == * List of the most distant astronomical objects * Farthest galaxies ==References== ==External links== * * NASA Great Observatories Find Candidate for Most Distant Object in the Universe to Date * European Space Agency – Galaxy cluster MACS J0647.7+7015 Category:Galaxies Category:Camelopardalis Category:Dwarf galaxies If the distance estimate is correct, it formed about 427 million years after the Big Bang. ==Details== JD refers to J-band Dropout – the galaxy was not detected in the so-called J-band (F125W), nor in 14 bluer Hubble filters. F200DB-045 would have a light-travel distance (lookback time) of 13.7 billion years, and, due to the expansion of the universe, a present proper distance of 36.1 billion light-years. Due to the expansion of the universe, its present proper distance is 33.6 billion light-years. Infrared NIRCam imaging of MACS0647-JD by the James Webb Space Telescope (JWST) in September 2022 determined a photometric redshift of , in agreement with the previous Hubble estimate. CEERS stands for ""Cosmic Evolution Early Release Science Survey"", and is a deep- and wide-field sky survey program developed specifically for JWST image studies, and is conducted by the CEERS Collaboration. ==See also== * Earliest galaxies * F200DB-045 * GLASS-z12 * HD1 (galaxy) * JADES-GS-z13-0 * List of the most distant astronomical objects * Peekaboo Galaxy ==References== ==External links== * CEERS WebSite * IMAGE: CEERS-93316 galaxy (1 Aug 2022) * * Category:Astronomical objects discovered in 2022 Category:Boötes Category:Galaxies Category:Discoveries by the James Webb Space Telescope __NOTOC__ UDFj-39546284 is a high-redshift Lyman-break galaxy discovered by the Hubble Space Telescope in infrared Hubble Ultra-Deep Field (HUDF) observations in 2009. A paper in April 2023 suggests that JADES-GS-z13-0 isn't in fact a galaxy, but a dark star with a mass of around a million times that of the Sun. == See also == * List of the most distant astronomical objects * GN-z11 - Previous record holder from 2016 to 2022. (z = 10.957) == References == Category:Astronomical objects discovered in 2022 Category:Galaxies Category:Fornax Category:Discoveries by the James Webb Space Telescope JADES-GS-z13-0 is a high-redshift Lyman-break galaxy discovered by the James Webb Space Telescope (JWST) during NIRCam imaging for the JWST Advanced Deep Extragalactic Survey (JADES) on 29 September 2022. ","Approximately z = 6.0, corresponding to 1 billion years after the Big Bang.","Approximately z = 16.7, corresponding to 235.8 million years after the Big Bang.","Approximately z = 3.0, corresponding to 5 billion years after the Big Bang.","Approximately z = 10.0, corresponding to 13 billion years after the Big Bang.","Approximately z = 13.0, corresponding to 30 billion light-years away from Earth.",B,kaggle200,"More than a million quasars have been found, with the nearest known being about 600 million light-years away from Earth. The record for the most distant known quasar continues to change. In 2017, the quasar ULAS J1342+0928 was detected at redshift ""z"" = 7.54. Light observed from this 800-million-solar-mass quasar was emitted when the universe was only 690 million years old. 
In 2020, the quasar Pōniuāʻena was detected from a time only 700 million years after the Big Bang, and with an estimated mass of 1.5 billion times the mass of the Sun. In early 2021, the quasar J0313–1806, with a 1.6-billion-solar-mass black hole, was reported at ""z"" = 7.64, 670 million years after the Big Bang.
HD1 is a proposed high-redshift galaxy, and is considered, as of April 2022, to be one of the earliest and most distant known galaxies yet identified in the observable universe. The galaxy, with an estimated redshift of approximately z = 13.27, is seen as it was about 324 million years after the Big Bang, 13.787 billion years ago. It has a light-travel distance (lookback time) of 13.463 billion light-years from Earth, and, due to the expansion of the universe, a present proper distance of 33.288 billion light-years.
Within two weeks of the first Webb images, several preprint papers described a wide range of early galaxies believed to date from 235 million years (z=16.7) to 280 million years after the Big Bang, far earlier than previously known. The results await peer review. On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the Near Infrared Camera (NIRCam) on JWST of numerous very early galaxies. Some early galaxies observed by JWST like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates.
While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the universe's evolution), IOK-1's age and composition have been more reliably established. In December 2012, astronomers reported that UDFj-39546284 is the most distant object known and has a redshift value of 11.9. The object, estimated to have existed around 380 million years after the Big Bang (which was about 13.8 billion years ago), is about 13.42 billion light travel distance years away. The existence of galaxies so soon after the Big Bang suggests that protogalaxies must have grown in the so-called ""dark ages"". As of May 5, 2015, the galaxy EGS-zs8-1 is the most distant and earliest galaxy measured, forming 670 million years after the Big Bang. The light from EGS-zs8-1 has taken 13 billion years to reach Earth, and is now 30 billion light-years away, because of the expansion of the universe during 13 billion years. On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the Near Infrared Camera (NIRCam) on the James Webb Space Telescope (JWST) of numerous very early galaxies. Some early galaxies observed by JWST, like CEERS-93316, a candidate high-redshift galaxy, has an estimated redshift of approximately z = 16.7, corresponding to 235.8 million years after the Big Bang.","HD1 is a proposed high-redshift galaxy, which is considered (as of April 2022) to be one of the earliest and most distant known galaxies yet identified in the observable universe. The galaxy, with an estimated redshift of approximately z = 13.27, is seen as it was about 324 million years after the Big Bang, which was 13.787 billion years ago. It has a light-travel distance (lookback time) of 13.463 billion light-years from Earth, and, due to the expansion of the universe, a present proper distance of 33.288 billion light-years.
MIRI low-resolution spectroscopy (LRS): a hot super-Earth planet L 168-9 b (TOI-134) around a bright M-dwarf star. Within two weeks of the first Webb images, several preprint papers described a wide range of high redshift and very luminous (presumably large) galaxies believed to date from 235 million years (z=16.7) to 280 million years after the Big Bang, far earlier than previously known. On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the Near Infrared Camera (NIRCam) on Webb of numerous very early galaxies. Some early galaxies observed by Webb like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates. In September 2022, primordial black holes were proposed as explaining these unexpectedly large and early galaxies. In June 2023 detection of organic molecules 12 billion light-years away in a galaxy called SPT0418-47 using the Webb telescope was announced. On 12 July 2023, NASA celebrated the first year of operations with the release of Webb’s image of a small star-forming region in the Rho Ophiuchi cloud complex, 390 light years away.
2022 — James Webb Space Telescope (JWST) releases the Webb's First Deep Field.
2022 — JWST detects CEERS-93316, a candidate high-redshift galaxy, with an estimated redshift of approximately z = 16.7, corresponding to 235.8 million years after the Big Bang. If confirmed, it is one of the earliest and most distant known galaxies observed.","Some early galaxies observed by Webb like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates. Some early galaxies observed by JWST, like CEERS-93316, a candidate high-redshift galaxy, has an estimated redshift of approximately z = 16.7, corresponding to 235.8 million years after the Big Bang. In September 2022, primordial black holes were proposed as explaining these unexpectedly large and early galaxies. In June 2023 detection of organic molecules 12 billion light-years away in a galaxy called SPT0418-47 using the Webb telescope was announced. On 12 July 2023, NASA celebrated the first year of operations with the release of Webb’s image of a small star-forming region in the Rho Ophiuchi cloud complex, 390 light years away.
2022 — James Webb Space Telescope (JWST) releases the Webb's First Deep Field.
2022 — JWST detects CEERS-93316, a candidate high-redshift galaxy, with an estimated redshift of approximately z = 16.7, corresponding to 235.8 million years after the Big Bang. Some early galaxies observed by JWST like CEERS-93316, which has an estimated redshift of approximately z=16.7 corresponding to 235.8 million years after the Big Bang, are high redshift galaxy candidates.
While some scientists have claimed other objects (such as Abell 1835 IR1916) have higher redshifts (and therefore are seen in an earlier stage of the universe's evolution), IOK-1's age and composition have been more reliably established. In 2017, the quasar ULAS J1342+0928 was detected at redshift ""z"" = 7.54. The galaxy, with an estimated redshift of approximately z = 13.27, is seen as it was about 324 million years after the Big Bang, 13.787 billion years ago. On 17 August 2022, NASA released a large mosaic image of 690 individual frames taken by the Near Infrared Camera (NIRCam) on the James Webb Space Telescope (JWST) of numerous very early galaxies.","
[SEP]What is the estimated redshift of CEERS-93316, a candidate high-redshift galaxy observed by the James Webb Space Telescope?","['B', 'D', 'C']",1.0
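Illustrative example: the redshift figures quoted in this row can be read through the standard relation 1 + z = λ_observed / λ_emitted. The sketch below converts z into the wavelength stretch factor and the cosmic scale factor at emission; converting z into "years after the Big Bang" would additionally require a cosmological model (H0, Ωm, ...), which is deliberately not attempted here. The choice of the Lyman-alpha line is an assumption for illustration.

```python
# Redshift bookkeeping: at redshift z, wavelengths are stretched by (1 + z) and the
# cosmic scale factor at emission was a = 1 / (1 + z) of today's value.

def wavelength_observed_nm(rest_wavelength_nm: float, z: float) -> float:
    return rest_wavelength_nm * (1.0 + z)

def scale_factor_at_emission(z: float) -> float:
    return 1.0 / (1.0 + z)

if __name__ == "__main__":
    z = 16.7                  # photometric redshift quoted for CEERS-93316 in this row
    lyman_alpha_nm = 121.567  # rest-frame Lyman-alpha line (illustrative choice)
    print(f"scale factor at emission: {scale_factor_at_emission(z):.3f} of today's size")
    print(f"Lyman-alpha observed at {wavelength_observed_nm(lyman_alpha_nm, z) / 1000:.2f} micrometres")
```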
What is bollard pull primarily used for measuring?,"Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around of bollard pull, which is described as above ""normal"" tugboats. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. In the English system units, HP = {R\times v\over550} ==Measurement== Values for bollard pull can be determined in two ways. ===Practical trial=== thumb|Figure 1: bollard pull trial under ideal (imaginary) conditions This method is useful for one-off ship designs and smaller shipyards. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. Bollard pull values are stated in tonnes-force (written as t or tonnef) or kilonewtons (kN).Note the inherent conflict: the SI unit of force is the newton. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load. The Sailor's Word-Book of 1867 defines a bollard in a more specific context as ""a thick piece of wood on the head of a whale-boat, round which the harpooner gives the line a turn, in order to veer it steadily, and check the animal's velocity"".Chris Roberts, Heavy Words Lightly Thrown: The Reason Behind Rhyme, Thorndike Press, 2006 () Bollards on ships, when arranged in pairs, may also be referred to as ""bitts"". === Road traffic === ==== Roadside bollards ==== Bollards can be used either to control traffic intake size by limiting movements, or to control traffic speed by narrowing the available space. Furthermore, simulation tools and computer systems capable of determining bollard pull for a ship design are costly. Practical trials can be used to validate the result of numerical simulation. ==Human-powered vehicles== Practical bollard pull tests under simplified conditions are conducted for human powered vehicles. They are popular in car park buildings and other areas of high vehicle usage. ==== Flexible ==== Flexible bollards are bollards designed to bend when struck by vehicles. See Figure 2 for an illustration of error influences in a practical bollard pull trial. Such bollards are effective against heavy goods vehicles that may damage or destroy conventional bollards or other types of street furniture. === Retractable === Manually retractable bollards (lowered by a key mechanism) are found useful in some cases because they require less infrastructure. Bollards are widely used to contribute to safety and security. The widely used Bourdon gauge is a mechanical device, which both measures and indicates and is probably the best known type of gauge. All of these factors contribute to measurement error. thumb|Figure 2: bollard pull trial under real conditions ===Simulation=== This method eliminates much of the uncertainties of the practical trial. A bollard is a sturdy, short, vertical post. Washington, DC Bollards are used by government agencies and private businesses to protect buildings, public spaces, and the people in them from car ramming attacks. 
It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. The term ""robotic bollards"" has been applied to traffic barricades capable of moving themselves into position on a roadway. ",The weight of heavy machinery,The speed of locomotives,The distance traveled by a truck,The strength of tugboats,The height of a ballast tractor,D,kaggle200,"See Figure 2 for an illustration of error influences in a practical bollard pull trial. Note the difference in elevation of the ends of the line (the port bollard is higher than the ship's towing hook). Furthermore, there is the partial short circuit in propeller discharge current, the uneven trim of the ship and the short length of the tow line. All of these factors contribute to measurement error.
Practical bollard pull tests under simplified conditions are conducted for human powered vehicles. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. Although conditions for such measurements are inaccurate in absolute terms, they are the same for all competitors. Hence, they can still be valid for comparing several craft.
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (in tonnes force, or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the ""static"" or ""maximum"" bollard pull - the highest force measured - and the ""steady"" or ""continuous"" bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around 60 to 65 short tons-force (530–580 kN; 54–59 tf) of bollard pull, which is described as 15 short tons-force (130 kN; 14 tf) above ""normal"" tugboats. The world's strongest tug is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of 477 tonnes-force (526 short tons-force; 4,680 kN). Island Victory is not a typical tug, rather it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel.","Values for bollard pull can be determined in two ways.
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. It is defined as the force (usually in tonnes-force or kilonewtons (kN)) exerted by a vessel under full power, on a shore-mounted bollard through a tow-line, commonly measured in a practical test (but sometimes simulated) under test conditions that include calm water, no tide, level trim, and sufficient depth and side clearance for a free propeller stream. Like the horsepower or mileage rating of a car, it is a convenient but idealized number that must be adjusted for operating conditions that differ from the test. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around 60 to 65 short tons-force (530–580 kN; 54–59 tf) of bollard pull, which is described as 15 short tons-force (130 kN; 14 tf) above ""normal"" tugboats. The worlds strongest tug since its delivery in 2020 is Island Victory (Vard Brevik 831) of Island Offshore, with a bollard pull of 477 tonnes-force (526 short tons-force; 4,680 kN). Island Victory is not a typical tug, rather it is a special class of ship used in the petroleum industry called an Anchor Handling Tug Supply vessel.","An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around 60 to 65 short tons-force (530–580 kN; 54–59 tf) of bollard pull, which is described as 15 short tons-force (130 kN; 14 tf) above ""normal"" tugboats. An equivalent measurement on land is known as drawbar pull, or tractive force, which is used to measure the total horizontal force generated by a locomotive, a piece of heavy machinery such as a tractor, or a truck, (specifically a ballast tractor), which is utilized to move a load.
Bollard pull is primarily (but not only) used for measuring the strength of tugboats, with the largest commercial harbour tugboats in the 2000-2010s having around 60 to 65 short tons-force (530–580 kN; 54–59 tf) of bollard pull, which is described as 15 short tons-force (130 kN; 14 tf) above ""normal"" tugboats. The bollard pull of a vessel may be reported as two numbers, the static or maximum bollard pull – the highest force measured – and the steady or continuous bollard pull, the average of measurements over an interval of, for example, 10 minutes. Values for bollard pull can be determined in two ways.
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. Hence, they can still be valid for comparing several craft.
Bollard pull is a conventional measure of the pulling (or towing) power of a watercraft. The bollard pull of a vessel may be reported as two numbers, the ""static"" or ""maximum"" bollard pull - the highest force measured - and the ""steady"" or ""continuous"" bollard pull, the average of measurements over an interval of, for example, 10 minutes. There, bollard pull is often a category in competitions and gives an indication of the power train efficiency. See Figure 2 for an illustration of error influences in a practical bollard pull trial. All of these factors contribute to measurement error.","
[SEP]What is bollard pull primarily used for measuring?","['D', 'E', 'B']",1.0
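Illustrative example: the context above quotes the English-unit relation HP = R×v/550 for towing power. The sketch below evaluates it under the usual reading that R is a force in pounds-force and v a speed in feet per second (550 ft·lbf/s per horsepower); the tug figures are made-up assumptions, not data from this row.

```python
# Towing power from line pull and towing speed, using the relation quoted above:
# HP = R * v / 550, with R in pounds-force and v in feet per second.

LBF_PER_KILONEWTON = 224.809  # pounds-force per kilonewton (standard conversion)
FTPS_PER_KNOT = 1.68781       # feet per second per knot (standard conversion)

def towing_horsepower(pull_kn: float, speed_knots: float) -> float:
    r_lbf = pull_kn * LBF_PER_KILONEWTON
    v_ftps = speed_knots * FTPS_PER_KNOT
    return r_lbf * v_ftps / 550.0

if __name__ == "__main__":
    # Hypothetical example: a tug sustaining 600 kN of line pull at 3 knots
    print(f"{towing_horsepower(600, 3):,.0f} hp")
```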
What is the piezoelectric strain coefficient for AT-cut quartz crystals?,"Within a certain range of strain this relationship is linear, so that the piezoresistive coefficient : \rho_\sigma = \frac{\left(\frac{\partial\rho}{\rho}\right)}{\varepsilon} where :∂ρ = Change in resistivity :ρ = Original resistivity :ε = Strain is constant. === Piezoresistivity in metals === Usually the resistance change in metals is mostly due to the change of geometry resulting from applied mechanical stress. Also, several ferroelectrics with perovskite-structure (BaTiO3 [BT], (Bi1/2Na1/2) TiO3 [BNT], (Bi1/2K1/2) TiO3 [BKT], KNbO3 [KN], (K, Na) NbO3 [KNN]) have been investigated for their piezoelectric properties. == Key piezoelectric properties == The following table lists the following properties for piezoelectric materials * The piezoelectric coefficients (d33, d31, d15 etc.) measure the strain induced by an applied voltage (expressed as meters per volt). The electromechanical coupling coefficient is a numerical measure of the conversion efficiency between electrical and acoustic energy in piezoelectric materials. A piezoresistor aligned with the x-axis as shown in the figure may be described by :\ V_r = R_0 I[1 + \pi _L \sigma _{xx} + \pi _T (\sigma _{yy} + \sigma _{zz} )] where R_0, I, \pi _T, \pi _L, and \sigma _{ij} denote the stress free resistance, the applied current, the transverse and longitudinal piezoresistive coefficients, and the three tensile stress components, respectively. The piezoelectric coefficient or piezoelectric modulus, usually written d33, quantifies the volume change when a piezoelectric material is subject to an electric field, or the polarization on the application of stress. In general, piezoelectricity is described by a tensor of coefficients d_{ij}; see for further details. ==External links== *List of piezoelectric materials *Table of properties for lead zirconate titanate *Piezoelectric terminology Category:Electrical phenomena The piezoresistive effect is a change in the electrical resistivity of a semiconductor or metal when mechanical strain is applied. The piezoresistive coefficients vary significantly with the sensor orientation with respect to the crystallographic axes and with the doping profile. This page lists properties of several commonly used piezoelectric materials. Under intense pressure (but limited temperature), the crystalline structure of quartz is deformed along planes inside the crystal. It is the inverse of the mechanical loss tan ϕ. == Table == Single crystals Reference Material & heterostructure used for the characterization (electrodes/material, electrode/substrate) Orientation Piezoelectric coefficients, d (pC/N) Relative permittivity, εr Electromechanical coupling factor, k Quality factor Hutson 1963Hutson, Andrew R. ""Piezoelectric devices utilizing aluminum nitride."" Piezoelectric polymers (PVDF, 240 mV-m/N) possess higher piezoelectric stress constants (g33), an important parameter in sensors, than ceramics (PZT, 11 mV-m/N), which show that they can be better sensors than ceramics. In platinum alloys, for instance, piezoresistivity is more than a factor of two larger, combining with the geometry effects to give a strain gauge sensitivity of up to more than three times as large than due to geometry effects alone. With single crystal silicon becoming the material of choice for the design of analog and digital circuits, the large piezoresistive effect in silicon and germanium was first discovered in 1954 (Smith 1954). 
== Mechanism == In conducting and semi-conducting materials, changes in inter-atomic spacing resulting from strain affect the bandgaps, making it easier (or harder depending on the material and strain) for electrons to be raised into the conduction band. * The mechanical quality factor Qm is an important high-power property of piezoelectric ceramics. Shocked quartz is a form of quartz that has a microscopic structure that is different from normal quartz. These polymorphs have a crystal structure different from standard quartz. The most commonly produced piezoelectric ceramics are lead zirconate titanate (PZT), barium titanate, and lead titanate. Image:Piezoresistor.jpg Schematic cross-section of the basic elements of a silicon n-well piezoresistor. ==== Physics of operation ==== For typical stress values in the MPa range the stress dependent voltage drop along the resistor Vr, can be considered to be linear. Though shocked quartz is only recently recognized, Eugene Shoemaker discovered it prior to its crystallographic description in building stones in the Bavarian town of Nördlingen, derived from shock-metamorphic rocks, such as breccia and pseudotachylite, of Ries crater. == See also == * Lechatelierite * Seifertite * Shatter cone * Shock metamorphism ==References== ==External links== * Shocked quartz page * Coesite page * Stishovite page Category:Quartz varieties Category:Impact geology ",d = 1.9·10‑12 m/V,d = 3.1·10‑12 m/V,d = 4.2·10‑12 m/V,d = 2.5·10‑12 m/V,d = 5.8·10‑12 m/V,B,kaggle200,"Because the inner element is receive only while the outer element is transmit only, special materials can be chosen to optimize the efficiency and sensitivity of this process. Lead Zirconate Titanate (PZT) works well as a material choice for the transmitting element because it has a high transmitting constant (d = 300 x 10^-12 m/V) while Polyvinylidene Fluoride (PVDF) works well as a material for the receiving element because it has a high receiving constant (g = 14 x 10^-2 Vm/N). Generally, PVDF is not a good choice for an ultrasound transducer because it has a relatively poor transmitting constant, however, since acoustic angiography separates the transmitting and receiving elements, this is no longer an issue.
CdTe is also applied for electro-optic modulators. It has the greatest electro-optic coefficient of the linear electro-optic effect among II-VI compound crystals (r41=r52=r63=6.8×10−12 m/V).
It consisted of a cartridge which interfaced an analog to digital converter (with 10, 12 and 14 bit variants) and software.
with ""u"" the amplitude of lateral displacement, ""n"" the overtone order, ""d"" the piezoelectric strain coefficient, ""Q"" the quality factor, and ""U"" the amplitude of electrical driving. The piezoelectric strain coefficient is given as ""d"" = 3.1·10 m/V for AT-cut quartz crystals. Due to the small amplitude, stress and strain usually are proportional to each other. The QCM operates in the range of linear acoustics.","Because the inner element is receive only while the outer element is transmit only, special materials can be chosen to optimize the efficiency and sensitivity of this process. Lead Zirconate Titanate (PZT) works well as a material choice for the transmitting element because it has a high transmitting constant (d = 300 x 10^-12 m/V) while Polyvinylidene Fluoride (PVDF) works well as a material for the receiving element because it has a high receiving constant (g = 14 x 10^-2 Vm/N). Generally, PVDF is not a good choice for an ultrasound transducer because it has a relatively poor transmitting constant, however, since acoustic angiography separates the transmitting and receiving elements, this is no longer an issue.
CdTe can be alloyed with mercury to make a versatile infrared detector material (HgCdTe). CdTe alloyed with a small amount of zinc makes an excellent solid-state X-ray and gamma ray detector (CdZnTe).
CdTe is used as an infrared optical material for optical windows and lenses and is proven to provide a good performance across a wide range of temperatures. An early form of CdTe for IR use was marketed under the trademarked name of Irtran-6, but this is obsolete.
CdTe is also applied for electro-optic modulators. It has the greatest electro-optic coefficient of the linear electro-optic effect among II-VI compound crystals (r41=r52=r63=6.8×10−12 m/V).
Amplitude of motion: The amplitude of lateral displacement rarely exceeds a nanometer. More specifically one has u0 = 4/(nπ)² · d·Q·Uel with u0 the amplitude of lateral displacement, n the overtone order, d the piezoelectric strain coefficient, Q the quality factor, and Uel the amplitude of electrical driving. The piezoelectric strain coefficient is given as d = 3.1·10‑12 m/V for AT-cut quartz crystals. Due to the small amplitude, stress and strain usually are proportional to each other. The QCM operates in the range of linear acoustics.","The piezoelectric strain coefficient is given as ""d"" = 3.1·10‑12 m/V for AT-cut quartz crystals. The piezoelectric strain coefficient is given as d = 3.1·10‑12 m/V for AT-cut quartz crystals. More specifically one has u0 = 4/(nπ)² · d·Q·Uel with u0 the amplitude of lateral displacement, n the overtone order, d the piezoelectric strain coefficient, Q the quality factor, and Uel the amplitude of electrical driving. It has the greatest electro-optic coefficient of the linear electro-optic effect among II-VI compound crystals (r41=r52=r63=6.8×10−12 m/V).
It consisted of a cartridge which interfaced an analog to digital converter (with 10, 12 and 14 bit variants) and software.
with ""u"" the amplitude of lateral displacement, ""n"" the overtone order, ""d"" the piezoelectric strain coefficient, ""Q"" the quality factor, and ""U"" the amplitude of electrical drivingIt has the greatest electro-optic coefficient of the linear electro-optic effect among II-VI compound crystals (r41=r52=r63=6.8×10−12 m/V).
Amplitude of motion: The amplitude of lateral displacement rarely exceeds a nanometer. Due to the small amplitude, stress and strain usually are proportional to each other. Generally, PVDF is not a good choice for an ultrasound transducer because it has a relatively poor transmitting constant, however, since acoustic angiography separates the transmitting and receiving elements, this is no longer an issue.
CdTe can be alloyed with mercury to make a versatile infrared detector material (HgCdTe). Generally, PVDF is not a good choice for an ultrasound transducer because it has a relatively poor transmitting constant, however, since acoustic angiography separates the transmitting and receiving elements, this is no longer an issue.
CdTe is also applied for electro-optic modulators. The QCM operates in the range of linear acoustics. CdTe alloyed with a small amount of zinc makes an excellent solid-state X-ray and gamma ray detector (CdZnTe).
CdTe is used as an infrared optical material for optical windows and lenses and is proven to provide a good performance across a wide range of temperatures. The QCM operates in the range of linear acoustics.","
[SEP]What is the piezoelectric strain coefficient for AT-cut quartz crystals?","['B', 'D', 'C']",1.0
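Illustrative example: the QCM passage above gives u0 = 4/(nπ)² · d·Q·Uel with d = 3.1·10‑12 m/V for AT-cut quartz. The sketch below simply evaluates that expression; the overtone order, quality factor, and drive voltage are assumptions chosen for illustration, not values stated in the row.

```python
import math

# Lateral displacement amplitude of a QCM surface from the relation quoted above:
# u0 = 4 / (n * pi)**2 * d * Q * U_el
D_AT_CUT = 3.1e-12  # m/V, piezoelectric strain coefficient for AT-cut quartz (from the text)

def qcm_amplitude_m(n_overtone: int, q_factor: float, drive_voltage_v: float) -> float:
    return 4.0 / (n_overtone * math.pi) ** 2 * D_AT_CUT * q_factor * drive_voltage_v

if __name__ == "__main__":
    # Hypothetical operating point: fundamental mode, Q = 10,000, 20 mV drive
    u0 = qcm_amplitude_m(n_overtone=1, q_factor=1e4, drive_voltage_v=0.02)
    print(f"u0 ~ {u0 * 1e9:.2f} nm")  # sub-nanometer, consistent with the text
```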
What is the difference between probability mass function (PMF) and probability density function (PDF)?,"""Density function"" itself is also used for the probability mass function, leading to further confusion.Ord, J.K. (1972) Families of Frequency Distributions, Griffin. (for example, Table 5.1 and Example 5.4) In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. ==Example== Suppose bacteria of a certain species typically live 4 to 6 hours. In other sources, ""probability distribution function"" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. Probability distribution function may refer to: * Probability distribution * Cumulative distribution function * Probability mass function * Probability density function In probability theory, a probability density function (PDF), or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Mass function may refer to: *Binary mass function, a function that gives the minimum mass of a star or planet in a spectroscopic binary system *Halo mass function, a function that describes the mass distribution of dark matter halos *Initial mass function, a function that describes the distribution of star masses when they initially form, before evolution *Probability mass function, a function that gives the probability that a discrete random variable is exactly equal to some value Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. In a more precise sense, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. Mass point may refer to: * Mass point geometry * Point mass in physics * The values of a probability mass function in probability and statistics More generally, if a discrete variable can take different values among real numbers, then the associated probability density function is: f(t) = \sum_{i=1}^n p_i\, \delta(t-x_i), where x_1, \ldots, x_n are the discrete values accessible to the variable and p_1, \ldots, p_n are the probabilities associated with these values. 
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. The values of the two integrals are the same in all cases in which both and actually have probability density functions. PMF may stand for: * Danish Union of Educators (Danish: Pædagogisk Medhjælper Forbund), a former Danish trade union * Pacific Music Festival, an international classical music festival held annually in Sapporo, Japan * Paramilitary forces, a semi-militarized force * Private military firm, a private company providing armed combat or security services for financial gain. In the field of statistical physics, a non- formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. A distribution has a density function if and only if its cumulative distribution function is absolutely continuous. Intuitively, one can think of f_X(x) \, dx as being the probability of X falling within the infinitesimal interval [x,x+dx]. ==Formal definition== (This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability. == Families of densities == It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. * Probability mass function, in statistics, function giving the probability that a variable takes a particular value * Product/market fit, in marketing, the degree to which a product satisfies a strong market demand * Professional Medical Film, a U.S. Army designation * Progressive massive fibrosis, an interstitial lung disease complication often seen in silicosis and pneumoconiosis * Protected Management Frames, a security feature of WiFi connections, see IEEE 802.11w-2009 * Proton motive force, a measure of energy in biological reactions * PMF, hacker turned as federal informant (operation Cybersnare) * .pmf, a Sony PlayStation Portable movie file, a proprietary format that can be extracted from PSP disk images This alternate definition is the following: If is an infinitely small number, the probability that is included within the interval is equal to , or: \Pr(t ==Link between discrete and continuous distributions== It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. 
","PMF is used only for continuous random variables, while PDF is used for both continuous and discrete random variables.","PMF is used for both continuous and discrete random variables, while PDF is used only for continuous random variables.","PMF is used for continuous random variables, while PDF is used for discrete random variables.","PMF is used for discrete random variables, while PDF is used for continuous random variables.",PMF and PDF are interchangeable terms used for the same concept in probability theory.,D,kaggle200,"Univariate distribution is a dispersal type of a single random variable described either with a probability mass function (pmf) for discrete probability distribution, or probability density function (pdf) for continuous probability distribution. It is not to be confused with multivariate distribution.
whose values are to be estimated. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
Whereas the ""pdf"" exists only for continuous random variables, the ""cdf"" exists for all random variables (including discrete random variables) that take values in formula_26
The terms """"probability distribution function"""" and """"probability function"""" have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, ""probability distribution function"" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. ""Density function"" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.","In the case of real-valued random variables, the joint distribution, as a particular multivariate distribution, may be expressed by a multivariate cumulative distribution function, or by a multivariate probability density function together with a multivariate probability mass function. In the special case of continuous random variables, it is sufficient to consider probability density functions, and in the case of discrete random variables, it is sufficient to consider probability mass functions.
Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, ""probability distribution function"" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. ""Density function"" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.","In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
Whereas the ""pdf"" exists only for continuous random variables, the ""cdf"" exists for all random variables (including discrete random variables) that take values in formula_26
The terms """"probability distribution function"""" and """"probability function"""" have also sometimes been used to denote the probability density function""Density function"" itself is also used for the probability mass function, leading to further confusionIn the special case of continuous random variables, it is sufficient to consider probability density functions, and in the case of discrete random variables, it is sufficient to consider probability mass functions.
Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density functionIn other sources, ""probability distribution function"" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the densityIn the case of real-valued random variables, the joint distribution, as a particular multivariate distribution, may be expressed by a multivariate cumulative distribution function, or by a multivariate probability densi","In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variablesIn general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.Third, the continuous probability density function (pdf) or its discrete counterpart, the probability mass function (pmf), of the underlying distribution that generated the data must be stated conditional on the values of the parameters:
Whereas the ""pdf"" exists only for continuous random variables, the ""cdf"" exists for all random variables (including discrete random variables) that take values in formula_26
The terms """"probability distribution function"""" and """"probability function"""" have also sometimes been used to denote the probability density function""Density function"" itself is also used for the probability mass function, leading to further confusionIn the special case of continuous random variables, it is sufficient to consider probability density functions, and in the case of discrete random variables, it is sufficient to consider probability mass functions.
Two or more discrete random variables have a joint probability mass function, which gives the probability of each possible combination of realizations for the random variables.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density functionIn other sources, ""probability distribution function"" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the densityIn the case of real-valued random variables, the joint distribution, as a particular multivariate distribution, may be expressed by a multivariate cumulative distribution function, or by a multivariate probability densi[SEP]What is the difference between probability mass function (PMF) and probability density function (PDF)?","['D', 'C', 'A']",1.0
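The PMF/PDF distinction discussed in the row above can be checked numerically. The following is a minimal illustrative sketch (an editorial aside, not part of the dataset row; the binomial and normal distributions are arbitrary assumed examples): a PMF is summed over a countable support, while a PDF assigns zero probability to any single point and is integrated over intervals.

# Minimal numerical check of the PMF vs PDF distinction (illustrative only).
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Discrete case: a binomial random variable has a PMF over a countable support.
n, p = 10, 0.3
pmf_total = sum(stats.binom.pmf(k, n, p) for k in range(n + 1))
print("binomial PMF summed over support:", pmf_total)   # ~1.0
print("P(X = 3) =", stats.binom.pmf(3, n, p))            # a genuine point probability

# Continuous case: a normal random variable has a PDF; point probabilities are zero,
# and probabilities come from integrating the density over an interval.
pdf_total, _ = quad(stats.norm.pdf, -np.inf, np.inf)
print("normal PDF integrated over the real line:", pdf_total)   # ~1.0
prob_interval, _ = quad(stats.norm.pdf, -1, 1)
print("P(-1 <= X <= 1) =", prob_interval)                        # ~0.683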
"How do the Lunar Laser Ranging Experiment, radar astronomy, and the Deep Space Network determine distances to the Moon, planets, and spacecraft?","For the first few years of the Lunar Laser Ranging Experiment, the distance between the observatory and the reflectors could be measured to an accuracy of about . The instantaneous precision of the Lunar Laser Ranging experiments can achieve few millimeter resolution, and is the most reliable method of determining the lunar distance to date. The measurement is also useful in characterizing the lunar radius, as well as the mass of and distance to the Sun. Millimeter-precision measurements of the lunar distance are made by measuring the time taken for laser beam light to travel between stations on Earth and retroreflectors placed on the Moon. The distance can be calculated from the round-trip time of laser light pulses travelling at the speed of light, which are reflected back to Earth by the Moon's surface or by one of five retroreflectors installed on the Moon during the Apollo program (11, 14, and 15) and Lunokhod 1 and 2 missions. Follow-on experiments lasting one month produced a semi-major axis of ( ± ), which was the most precise measurement of the lunar distance at the time. === Laser ranging === thumb|Lunar Laser Ranging Experiment from the Apollo 11 mission An experiment which measured the round-trip time of flight of laser pulses reflected directly off the surface of the Moon was performed in 1962, by a team from Massachusetts Institute of Technology, and a Soviet team at the Crimean Astrophysical Observatory. In a relative sense, this is one of the most precise distance measurements ever made, and is equivalent in accuracy to determining the distance between Los Angeles and New York to within the width of a human hair. == List of retroreflectors == == List of observatories == The table below presents a list of active and inactive Lunar Laser Ranging stations on Earth. thumb|Lunar Laser Ranging Experiment from the Apollo 11 mission Lunar Laser Ranging (LLR) is the practice of measuring the distance between the surfaces of the Earth and the Moon using laser ranging. Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO. ==History== thumb|Apollo 15 LRRR thumb|Apollo 15 LRRR schematic The first successful lunar ranging tests were carried out in 1962 when Louis Smullin and Giorgio Fiocco from the Massachusetts Institute of Technology succeeded in observing laser pulses reflected from the Moon's surface using a laser with a 50J 0.5 millisecond pulse length. Using telescopes on Earth, the reflectors on the Moon, and accurate timing of laser pulses, scientists were able to measure and predict the orbit of the Moon to a precision of a few centimeters by the early 2000s. In APOLLO, the incoming photons are spread over an array of independent detectors, which reduces the chance that two or more photons hit any one of the detectors. === Modeling station locations === Any laser ranging station, APOLLO included, measures the transit time, and hence the distance, from the telescope to the reflector(s). Some of the findings of this long-term experiment are: === Properties of the Moon === * The distance to the Moon can be measured with millimeter precision. By confirming the accuracy of previous measurements, and making new even more accurate measurements, the still unresolved discrepancy between theory and experiment is now placed more firmly on the theoretical models. 
== The collaboration == APOLLO is collaboration between: University of California, San Diego (Tom Murphy Principal investigator), University of Washington, Harvard, Jet Propulsion Laboratory, Lincoln Laboratory, Northwest Analysis, Apache Point Observatory, and Humboldt State. == References == ==External links== *What Neil & Buzz Left on the Moon NASA description of the basics of Lunar Laser Ranging *Main web page for the Apache Point Lunar Laser Ranging Project Category:Lunar science Category:Tests of general relativity Category:2005 establishments in New Mexico The experiments have constrained the change in Newton's gravitational constant G to a factor of per year. ==Gallery== File:ALSEP AS14-67-9386.jpg|Apollo 14 Lunar Ranging Retro Reflector (LRRR) File:LunarPhotons.png|APOLLO collaboration photon pulse return times File:Wettzell Laser Ranging System.jpg|Laser ranging facility at Wettzell fundamental station, Bavaria, Germany File:Goddard Spaceflight Center Laser Ranging Facility.jpg|Laser Ranging at Goddard Space Flight Center ==See also== * Carroll Alley (first principal investigator of the Apollo Lunar Laser Ranging team) * Lidar * Lunar distance (astronomy) * Satellite laser ranging * Space geodesy * Third-party evidence for Apollo Moon landings * List of artificial objects on the Moon ==References== ==External links== * ""Theory and Model for the New Generation of the Lunar Laser Ranging Data"" by Sergei Kopeikin * Apollo 15 Experiments - Laser Ranging Retroreflector by the Lunar and Planetary Institute * ""History of Laser Ranging and MLRS"" by the University of Texas at Austin, Center for Space Research * ""Lunar Retroreflectors"" by Tom Murphy * Station de Télémétrie Laser-Lune in Grasse, France * Lunar Laser Ranging from International Laser Ranging Service * ""UW researcher plans project to pin down moon's distance from Earth"" by Vince Stricherz, UW Today, 14 January 2002 * ""What Neil & Buzz Left on the Moon"" by Science@NASA, 20 July 2004 * ""Apollo 11 Experiment Still Returning Results"" by Robin Lloyd, CNN, 21 July 1999 * ""Shooting Lasers at the Moon: Hal Walker and the Lunar Retroreflector"" by Smithsonian National Air and Space Museum, YouTube, 20 Aug 2019 Category:Lunar science Category:Apollo program hardware Category:Tests of general relativity For the terrestrial model, the IERS Conventions (2010) is a source of detailed information. ==Results== Lunar laser ranging measurement data is available from the Paris Observatory Lunar Analysis Center, the International Laser Ranging Service archives, and the active stations. A distance was calculated with an uncertainty of , and this remained the definitive lunar distance value for the next half century. ==== Occultations ==== By recording the instant when the Moon occults a background star, (or similarly, measuring the angle between the Moon and a background star at a predetermined moment) the lunar distance can be determined, as long as the measurements are taken from multiple locations of known separation. Modern Lunar Laser Ranging data can be fit with a 1 cm weighted rms residual. Analyzing the range data involves dynamics, terrestrial geophysics, and lunar geophysics. A review of Lunar Laser Ranging is available. As of 2009, the distance to the Moon can be measured with millimeter precision. It can be seen that the measured range is , approximately the distance from the Earth to the Moon The distance to the moon was measured by means of radar first in 1946 as part of Project Diana. 
",They determine the values of electromagnetic constants.,They measure round-trip transit times.,They measure the actual speed of light waves.,They use interferometry to determine the speed of light.,They separately determine the frequency and wavelength of a light beam.,B,kaggle200,"The placement of 3 retroreflectors on the Moon by the Lunar Laser Ranging experiment and 2 retroreflectors by Lunokhod rovers allowed accurate measurement of the physical librations by laser ranging to the Moon.
Lunar laser ranging measurement data is available from the Paris Observatory Lunar Analysis Center, the International Laser Ranging Service archives, and the active stations. Some of the findings of this long-term experiment are:
Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300000 kilometres (186000 mi) in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging Experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.","The table below presents a list of active and inactive Lunar Laser Ranging stations on Earth.
A review of Lunar Laser Ranging is available. Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Distance measurement Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. A Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's position. Because light travels about 300000 kilometres (186000 mi) in one second, these measurements of small fractions of a second must be very precise. The Lunar Laser Ranging experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.","The Lunar Laser Ranging Experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.The Lunar Laser Ranging experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit timesSome of the findings of this long-term experiment are:
Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. The table below presents a list of active and inactive Lunar Laser Ranging stations on Earth.
A review of Lunar Laser Ranging is available. Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Distance measurement Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. - The placement of 3 retroreflectors on the Moon by the Lunar Laser Ranging experiment and 2 retroreflectors by Lunokhod rovers allowed accurate measurement of the physical librations by laser ranging to the Moon.
Lunar laser ranging measurement data is available from the Paris Observatory Lunar Analysis Center, the International Laser Ranging Service archives, and the active stationsBecause light travels about 300000 kilometres (186000 mi) in one second, these measurements of small fractions of a second must be very preciseA Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's positionBecause light travels","The Lunar Laser Ranging Experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit times.The Lunar Laser Ranging experiment, radar astronomy and the Deep Space Network determine distances to the Moon, planets and spacecraft, respectively, by measuring round-trip transit timesSome of the findings of this long-term experiment are:
Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. The table below presents a list of active and inactive Lunar Laser Ranging stations on Earth.
A review of Lunar Laser Ranging is available. Laser ranging measurements can also be made with retroreflectors installed on Moon-orbiting satellites such as the LRO.
Distance measurement Radar systems measure the distance to a target by the time it takes a radio-wave pulse to return to the radar antenna after being reflected by the target: the distance to the target is half the round-trip transit time multiplied by the speed of light. - The placement of 3 retroreflectors on the Moon by the Lunar Laser Ranging experiment and 2 retroreflectors by Lunokhod rovers allowed accurate measurement of the physical librations by laser ranging to the Moon.
Lunar laser ranging measurement data is available from the Paris Observatory Lunar Analysis Center, the International Laser Ranging Service archives, and the active stationsBecause light travels about 300000 kilometres (186000 mi) in one second, these measurements of small fractions of a second must be very preciseA Global Positioning System (GPS) receiver measures its distance to GPS satellites based on how long it takes for a radio signal to arrive from each satellite, and from these distances calculates the receiver's positionBecause light travels[SEP]How do the Lunar Laser Ranging Experiment, radar astronomy, and the Deep Space Network determine distances to the Moon, planets, and spacecraft?","['B', 'C', 'A']",1.0
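As the passages in this row repeatedly state, radar, GPS, lunar laser ranging and the Deep Space Network all convert a measured round-trip transit time into a distance as half the round-trip time multiplied by the speed of light. A small worked sketch follows (an illustrative aside, not part of the dataset row; the 2.56 s round-trip figure is an assumed approximate lunar value, not taken from the row):

# Distance from round-trip transit time: d = c * t / 2 (illustrative sketch).
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_distance(round_trip_seconds: float) -> float:
    """Half the round-trip time multiplied by the speed of light."""
    return C * round_trip_seconds / 2.0

# Assumed example value: a laser pulse returning from the Moon after roughly 2.56 s.
lunar_range_m = one_way_distance(2.56)
print(f"approximate lunar range: {lunar_range_m / 1000:.0f} km")  # ~383,700 km

# Millimetre-level ranging therefore needs picosecond-level timing:
# a 1 mm error in one-way distance corresponds to about 6.7e-12 s of round-trip time.
dt_for_1mm = 2 * 1e-3 / C
print(f"round-trip timing error for 1 mm: {dt_for_1mm:.1e} s")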
What is the Ozma Problem?,"Özalp Özer Ph.D. is an American business professor specializing in pricing science and operations research. thumb|right|A vehicle tire showing signs of ozone cracking An antiozonant, also known as anti-ozonant, is an organic compound that prevents or retards damage caused by ozone. The Ozmapolitan of Oz is a 1986 novel written and illustrated by Dick Martin. Oziel is a given name. Özer is currently one of the associate editors for the journals Management Science and Operations Research. He is the Ashbel Smith Professor of Management Science at the Naveen Jindal School of Management and also currently serves as an affiliated faculty at the MIT Sloan School of Management. ==Career== Originally from Turkey, Özer attended Bilkent University, where he earned an undergraduate degree in Industrial Engineering in 1996. After receiving his Ph.D. in 2000, Özer worked as an assistant professor in the Management Science and Engineering Department at Stanford University until 2007. As its title indicates, the book is an entrant in the long-running series of stories on the Land of Oz written by L. Frank Baum and various successors.Paul Nathanson, Over the Rainbow: The Wizard of Oz as a Secular Myth of America, Albany, NY, State University of New York Press, 1991.Suzanne Rahn, The Wizard of Oz: Shaping an Imaginary World, New York, Twayne, 1998.Michael O'Neal Riley, Oz and Beyond: The Fantasy World of L. Frank Baum, Lawrence, KS, University Press of Kansas, 1997. ==Authorship== Like his predecessor John R. Neill, Dick Martin was a veteran Oz illustrator who moved into Oz authorship; The Ozmapolitan of Oz is Martin's single sustained work of Oz fiction. Ground-level ozone is naturally present, but it is also a product of smog and thus degradation is faster in areas of high air pollution. A decade and a half later, Dave Hardenbrook would also offer a teenage protagonist in his 2000 novel The Unknown Witches of Oz; Martin does not go as far as Hardenbrook later would in making his teen hero a romantic interest. ==The term ""Ozmapolitan""== The word ""Ozmopolitan"" was first used in 1904, in promotional material created by Baum's publisher Reilly & Britton. A number of research projects study the application of another type of antiozonants to protect plants. == Effect of ozone == thumb|upright=1.25|The distribution of atmospheric ozone Many elastomers are rich in unsaturated double bonds, which can react with ozone present in the air in process known as ozonolysis. Martin may have been writing mainly to amuse his young readers; but his handling of the subject suggests that he was out of sympathy with much of twentieth-century art. ==Response== In 1987, a year after the appearance of The Ozmapolitan of Oz, Chris Dulabone published his The Colorful Kitten of Oz, in which Eureka is the title character. In 2014, Özer was awarded the Best Paper Award by Management Science for his research in Trust in Forecast Information Sharing. They all return to the Emerald City, with abundant material for the Ozmapolitan. The book includes an afterword that addresses perceived inconsistencies in Martin's book. ==References== ==External links== * The Ozmapolitan press releases, 1904 and after Category:Oz (franchise) books Category:1986 fantasy novels Category:1986 American novels Category:1986 children's books The rate of degradation is effected both by the chemical structure of the elastomer and the amount of ozone in the environment. 
his ""Game Preserve"" is a Parcheesi-like board game laid out in a landscape.The Ozmapolitan of Oz, pp. 52-7. He includes Decalcomania, Xenophobia, Yahooism, and Zymolysis in a list of human diseases;Dick Martin, The Ozmapolitan of Oz, Kinderhook, IL, The International Wizard of Oz Club, 1986; p. The idea was that the Wizard of Oz started an Oz newspaper so titled (a conceit that Martin adopts for his novel). The most obvious effect of this is cracking of the elastomer (ozone cracking), which is exacerbated by mechanical stress. ",The Ozma Problem is a chapter in a book that discusses the versatility of carbon and chirality in biochemistry.,"The Ozma Problem is a discussion about time invariance and reversal in particle physics, theoretical physics, and cosmology.","The Ozma Problem is a conundrum that examines whether there is any fundamental asymmetry to the universe. It concerns various aspects of atomic and subatomic physics and how they relate to mirror asymmetry and the related concepts of chirality, antimatter, magnetic and electrical polarity, parity, charge and spin.",The Ozma Problem is a measure of how symmetry and asymmetry have evolved from the beginning of life on Earth.,The Ozma Problem is a comparison between the level of a desired signal and the level of background noise used in science and engineering.,C,kaggle200,"The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60. At a conference earlier that year, Richard Feynman had asked (on behalf of Martin M. Block) whether parity was sometimes violated, leading Tsung-Dao Lee and Chen-Ning Yang to propose Wu's experiment, for which Lee and Yang were awarded the 1957 Nobel Prize in Physics. It was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrials. An earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized.
The degree to which such a description is subjective is rather subtle. See the Ozma Problem for an illustration of this.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universe. This discussion concerns various aspects of atomic and subatomic physics and how they relate to mirror asymmetry and the related concepts of chirality, antimatter, magnetic and electrical polarity, parity, charge and spin. Time invariance (and reversal) is discussed. Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and M-theory.
The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project Ozma. This is the problem of how to communicate the meaning of left and right, where the two communicants are conditionally not allowed to view any one object in common.","The problem was first implied in Immanuel Kant's discussion of a hand isolated in space, which would have no meaning as left or right by itself; Gardner posits that Kant would today explain his problem using the reversibility of objects through a higher dimension. A three-dimensional hand can be reversed in a mirror or a hypothetical fourth dimension. In more easily visualizable terms, an outline of a hand in Flatland could be flipped over; the meaning of left or right would not apply until a being missing a corresponding hand came along. Charles Howard Hinton expressed the essential problem in 1888, as did William James in his The Principles of Psychology (1890). Gardner follows the thread of several false leads on the road to the solution of the problem, such as the magnetic poles of astronomical bodies and the chirality of life molecules, which could be arbitrary based on how life locally originated.The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60. At a conference earlier that year, Richard Feynman had asked (on behalf of Martin M. Block) whether parity was sometimes violated, leading Tsung-Dao Lee and Chen-Ning Yang to propose Wu's experiment, for which Lee and Yang were awarded the 1957 Nobel Prize in Physics. It was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrials. An earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universe. This discussion concerns various aspects of atomic and subatomic physics and how they relate to mirror asymmetry and the related concepts of chirality, antimatter, magnetic and electrical polarity, parity, charge and spin. Time invariance (and reversal) is discussed. Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and M-theory.
The Ozma Problem The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project Ozma. This is the problem of how to communicate the meaning of left and right, where the two communicants are conditionally not allowed to view any one object in common.","- The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and M-theory.
The Ozma Problem The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project OzmaGardner follows the thread of several false leads on the road to the solution of the problem, such as the magnetic poles of astronomical bodies and the chirality of life molecules, which could be arbitrary based on how life locally originated.The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and .
The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project OzmaAn earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universeIt was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrialsSee the Ozma Problem for an illustration of this.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whet","- The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and M-theory.
The Ozma Problem The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project OzmaGardner follows the thread of several false leads on the road to the solution of the problem, such as the magnetic poles of astronomical bodies and the chirality of life molecules, which could be arbitrary based on how life locally originated.The solution to the Ozma Problem was finally realized in the famous Wu experiment, conducted in 1956 by Chinese-American physicist Chien-Shiung Wu (1912–1997), involving the beta decay of cobalt-60Implications for particle physics, theoretical physics and cosmology are covered and brought up to date (in later editions of the book) with regard to Grand Unified Theories, theories of everything, superstring theory and .
The 18th chapter, ""The Ozma Problem"", poses a problem that Gardner claims would arise if Earth should ever enter into communication with life on another planet through Project OzmaAn earlier example of asymmetry had actually been detected as early as 1928 in the decay of a radionuclide of radium, but its significance was not then realized.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whether there is any fundamental asymmetry to the universeIt was the first experiment to disprove the conservation of parity, and according to Gardner, one could use it to convey the meaning of left and right to remote extraterrestrialsSee the Ozma Problem for an illustration of this.
The last several chapters deal with a conundrum called the Ozma Problem, which examines whet[SEP]What is the Ozma Problem?","['C', 'B', 'E']",1.0
What is a Hilbert space in quantum mechanics?,"In quantum mechanics, the Hilbert space is the space of complex-valued functions belonging to L^2 (\mathbb{R}^3 , d^3x), where the simple \mathbb{R}^3 is the classical configuration space of free particle which has finite degrees of freedom, and d^3 x is the Lebesgue measure on \mathbb{R}^3. Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. In mathematics and the foundations of quantum mechanics, the projective Hilbert space P(H) of a complex Hilbert space H is the set of equivalence classes of non-zero vectors v in H, for the relation \sim on H given by :w \sim v if and only if v = \lambda w for some non-zero complex number \lambda. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. In quantum field theory, it is expected that the Hilbert space is also the L^2 space on the configuration space of the field, which is infinite dimensional, with respect to some Borel measure naturally defined. In the mathematical physics of quantum mechanics, Liouville space, also known as line space, is the space of operators on Hilbert space. The term Hilbert geometry may refer to several things named after David Hilbert: * Hilbert's axioms, a modern axiomatization of Euclidean geometry * Hilbert space, a space in many ways resembling a Euclidean space, but in important instances infinite-dimensional * Hilbert metric, a metric that makes a bounded convex subset of a Euclidean space into an unbounded metric space In the quantum mechanics the domain space of the wave functions \psi is the classical configuration space \mathbb{R}^3. Liouville space is itself a Hilbert space under the Hilbert-Schmidt inner product. Liouville space underlies the density operator formalism and is a common computation technique in the study of open quantum systems. ==References== Category:Hilbert spaces Category:Linear algebra Category:Operator theory Category:Functional analysis This is the usual construction of projectivization, applied to a complex Hilbert space. ==Overview== The physical significance of the projective Hilbert space is that in quantum theory, the wave functions \psi and \lambda \psi represent the same physical state, for any \lambda e 0. The same construction can be applied also to real Hilbert spaces. Complex projective Hilbert space may be given a natural metric, the Fubini–Study metric, derived from the Hilbert space's norm. ==Product== The Cartesian product of projective Hilbert spaces is not a projective space. Relative-position state and relative- momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained."" For the finite-dimensional complex Hilbert space, one writes :P(H_{n})=\mathbb{C}P^{n-1} so that, for example, the projectivization of two-dimensional complex Hilbert space (the space describing one qubit) is the complex projective line \mathbb{C}P^{1}. Thus the intuitive expectation should be modified, and the concept of quantum configuration space should be introduced as a suitable enlargement of the classical configuration space so that an infinite dimensional measure, often a cylindrical measure, can be well defined on it. This symplectic Hilbert space is denoted by \mathcal{H}(\Gamma). 
In quantum field theory, the quantum configuration space, the domain of the wave functions \Psi, is larger than the classical configuration space. In the case where \psi(q,p)\propto W(q,p), worked in the beginning of the section, the Oliveira approach and phase-space formulation are indistinguishable, at least for pure states. == Equivalence of representations == As it was states before, the first wave-function formulation of quantum mechanics was developed by Torres- Vega and Frederick, its phase-space operators are given by :\widehat{x}_{{}_\text{TV}}=\frac{1}{2}x+i\hbar\frac{\partial}{\partial p} , and :\widehat{p\,}_{{}_\text{TV}}=\frac{1}{2}p-i\hbar\frac{\partial}{\partial x} . Then \psi(x,p)\propto W(q,p). === Torres-Vega–Frederick representation === With the operators of position and momentum a Schrödinger picture is developed in phase space :i\hbar\frac{\partial}{\partial t}\psi(x,p,t)=\widehat{H}_{{}_\text{TV}}\psi(x,p,t) . ",A complex vector space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A physical space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A physical space where the state of a quantum mechanical system is described by a vector |Ψ⟩.,A mathematical space where the state of a classical mechanical system is described by a vector |Ψ⟩.,A complex vector space where the state of a quantum mechanical system is described by a vector |Ψ⟩.,E,kaggle200,"Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. A usual presentation of that Hilbert space is a special function space, called L2(X), on certain set X, that is either some configuration space or a discrete set.
with the vector |""ψ""> representing the complete potential state in Hilbert space, co-efficients c for ""i"" = 1, ... , n numbers in the complex plane related to the probability of each corresponding vector and each vector |""i""> representing each n-indeterminate state forming an orthogonal basis spanning |""i"">.
In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction . More abstractly, the state may be represented as a state vector, or ""ket"", |""ψ""⟩. This ket is an element of a ""Hilbert space"", a vector space containing all possible states of the system. A quantum-mechanical operator is a function which takes a ket |""ψ""⟩ and returns some other ket |""ψ′""⟩.","In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space H . This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys ⟨ψ,ψ⟩=1 , and it is well-defined up to a complex number of modulus 1 (the global phase), that is, ψ and eiαψ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L2(C) , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C2 with the usual inner product.
Mathematical In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert space. It may be either infinite- or finite-dimensional. A usual presentation of that Hilbert space is a special function space, called L2(X), on certain set X, that is either some configuration space or a discrete set.
In quantum mechanics, the conjugate to a ket vector |ψ⟩ is denoted as ⟨ψ| – a bra vector (see bra–ket notation).","The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L2(C) , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C2 with the usual inner product.
Mathematical In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert spaceThis ket is an element of a ""Hilbert space"", a vector space containing all possible states of the systemIn the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space H A usual presentation of that Hilbert space is a special function space, called L2(X), on certain set X, that is either some configuration space or a discrete set.
In quantum mechanics, the conjugate to a ket vector |ψ⟩ is denoted as ⟨ψ| – a bra vector (see bra–ket notation)In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective spaceA usual presentation of that Hilbert space is a special function space, called , on certain set , that is either some configuration space or a discrete set.
with the vector |""ψ""> representing the complete potential state in Hilbert space, co-efficients c for ""i"" = 1, ..A quantum-mechanical operator is a function which takes a ket |""ψ""⟩ and returns some other ket |""ψ′""⟩., n numbers in the complex plane related to the probability of each corresponding vector and each vector |""i""> representing each n-indeterminate state forming an orthogonal basis spanning |""i"">.
In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction - Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
In a formal setup, any system in quantum mech","The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions L2(C) , while the Hilbert space for the spin of a single proton is simply the space of two-dimensional complex vectors C2 with the usual inner product.
Mathematical In a formal setup, any system in quantum mechanics is described by a state, which is a vector |Ψ⟩, residing in an abstract complex vector space, called a Hilbert spaceThis ket is an element of a ""Hilbert space"", a vector space containing all possible states of the systemIn the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector ψ belonging to a (separable) complex Hilbert space H A usual presentation of that Hilbert space is a special function space, called L2(X), on certain set X, that is either some configuration space or a discrete set.
In quantum mechanics, the conjugate to a ket vector |ψ⟩ is denoted as ⟨ψ| – a bra vector (see bra–ket notation)In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective spaceA usual presentation of that Hilbert space is a special function space, called , on certain set , that is either some configuration space or a discrete set.
with the vector |""ψ""> representing the complete potential state in Hilbert space, co-efficients c for ""i"" = 1, ..A quantum-mechanical operator is a function which takes a ket |""ψ""⟩ and returns some other ket |""ψ′""⟩., n numbers in the complex plane related to the probability of each corresponding vector and each vector |""i""> representing each n-indeterminate state forming an orthogonal basis spanning |""i"">.
In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction - Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
In a formal setup, any system in quantum mech[SEP]What is a Hilbert space in quantum mechanics?","['E', 'A', 'C']",1.0
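The state-vector formalism quoted in this row (a normalized vector |Ψ⟩ in a complex Hilbert space, with ⟨ψ,ψ⟩ = 1, a physically irrelevant global phase, and the qubit case C²) can be illustrated numerically. This is a minimal sketch (an editorial aside, not part of the dataset row; the particular amplitudes are arbitrary assumptions):

# A qubit state in the two-dimensional complex Hilbert space C^2 (illustrative sketch).
import numpy as np

# Arbitrary assumed amplitudes, then normalized so that <psi|psi> = 1.
psi = np.array([1.0 + 1.0j, 2.0 - 0.5j])
psi = psi / np.linalg.norm(psi)

inner = np.vdot(psi, psi)            # <psi|psi>; np.vdot conjugates the bra
print("norm squared:", inner.real)   # 1.0

# A global phase leaves the physical state unchanged: e^{i*alpha} * psi is the same system.
alpha = 0.7
psi_phase = np.exp(1j * alpha) * psi
print("overlap magnitude:", abs(np.vdot(psi, psi_phase)))  # still 1.0

# Born rule: probabilities of the two basis outcomes sum to 1.
print("outcome probabilities:", np.abs(psi) ** 2)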
What is the significance of the speed of light in vacuum?,"In this context, the ""speed of light"" refers to the limiting speed c of the theory rather than to the velocity of propagation of photons. ==Historical proposals== ===Background=== Einstein's equivalence principle, on which general relativity is founded, requires that in any local, freely falling reference frame, the speed of light is always the same. In electromagnetism, electromagnetic waves in vacuum travel at the speed of light c, according to Maxwell's Equations. Accepted classical theories of physics, and in particular general relativity, predict a constant speed of light in any local frame of reference and in some situations these predict apparent variations of the speed of light depending on frame of reference, but this article does not refer to this as a variable speed of light. Depending on the value assumed for the astronomical unit, this yields the speed of light as just a little more than 300,000 kilometres per second. Relativistic speed refers to speed at which relativistic effects become significant to the desired accuracy of measurement of the phenomenon being observed. This leaves open the possibility, however, that an inertial observer inferring the apparent speed of light in a distant region might calculate a different value. The light-second is a unit of length useful in astronomy, telecommunications and relativistic physics. VSL cosmologies remain out of mainstream physics. ==References== ==External links== *Is the speed of light constant? Rømer's determination of the speed of light was the demonstration in 1676 that light has an apprehensible, measurable speed and so does not travel instantaneously. A variable speed of light (VSL) is a feature of a family of hypotheses stating that the speed of light may in some way not be constant, for example, that it varies in space or time, or depending on frequency. It is defined as the distance that light travels in free space in one second, and is equal to exactly 299 792 458 metres (approximately 983 571 055 ft). Speed is a scalar, being the magnitude of the velocity vector which in relativity is the four-velocity and in three-dimension Euclidean space a three-velocity. Non-relativistic discrepancies include cosine error which occurs in speed detection devices when only one scalar component of the three- velocity is measured and the Doppler effect which may affect observations of wavelength and frequency.thumb|Inverse of Lorentz factor as a function of speed, v, as a proportion of light speed, c - a circular arc.|center|346x346px Relativistic effects are highly non-linear and for everyday purposes are insignificant because the Newtonian model closely approximates the relativity model. It would be another thirty years before A. A. Michelson in the United States published his more precise results (299,910±50 km/s) and Simon Newcomb confirmed the agreement with astronomical measurements, almost exactly two centuries after Rømer's announcement. ==Later discussion== ===Did Rømer measure the speed of light?=== Several discussions have suggested that Rømer should not be credited with the measurement of the speed of light, as he never gave a value in Earth-based units.Cohen (1940). These authors credit Huygens with the first calculation of the speed of light.French (1990), pp. 120–21. Several hypotheses for varying speed of light, seemingly in contradiction to general relativity theory, have been published, including those of Giere and Tan (1986) and Sanejouand (2009). 
The first measurements of the speed of light using completely terrestrial apparatus were published in 1849 by Hippolyte Fizeau (1819–96). The apparent speed of light will change in a gravity field and, in particular, go to zero at an event horizon as viewed by a distant observer. It is usually quoted as ""light-time for unit distance"" in tables of astronomical constants, and its currently accepted value is s.. Spatial variation of the speed of light in a gravitational potential as measured against a distant observer's time reference is implicitly present in general relativity. ",The speed of light in vacuum is only relevant when measuring the one-way speed of light.,The speed of light in vacuum is only relevant when measuring the two-way speed of light.,The speed of light in vacuum is independent of the motion of the wave source and the observer's inertial frame of reference.,The speed of light in vacuum is dependent on the motion of the wave source and the observer's inertial frame of reference.,The speed of light in vacuum is only relevant when c appears explicitly in the units of measurement.,C,kaggle200,"Exponentiation with base is used in scientific notation to denote large or small numbers. For instance, (the speed of light in vacuum, in metres per second) can be written as and then approximated as .
In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum ""c"" featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is usually denoted by a lowercase c, for ""constant"" or the Latin celeritas (meaning 'swiftness, celerity'). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant that was later shown to equal √2 times the speed of light in vacuum. Historically, the symbol ""V"" was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used ""V"" in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of light.
Sometimes is used for the speed of waves in any material medium, and for the speed of light in vacuum. This subscripted notation, which is endorsed in official SI literature, has the same form as related electromagnetic constants: namely, ""μ"" for the vacuum permeability or magnetic constant, ""ε"" for the vacuum permittivity or electric constant, and ""Z"" for the impedance of free space. This article uses exclusively for the speed of light in vacuum.","Special relativity In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observer. Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to light. This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is defined to be exactly 299 792 458 m/s (approx. 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the metre is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.
The speed of light in vacuum is usually denoted by a lowercase c, for ""constant"" or the Latin celeritas (meaning 'swiftness, celerity'). In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant that was later shown to equal √2 times the speed of light in vacuum. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of light.Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This subscripted notation, which is endorsed in official SI literature, has the same form as related electromagnetic constants: namely, μ0 for the vacuum permeability or magnetic constant, ε0 for the vacuum permittivity or electric constant, and Z0 for the impedance of free space. This article uses c exclusively for the speed of light in vacuum.","This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is defined to be exactly 299 792 458 m/s (approxUsing this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum ""c"" featured as a fundamental constant, also appearing in contexts unrelated to lightThis made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is usually denoted by a lowercase , for ""constant"" or the Latin (meaning 'swiftness, celerity')Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to lightAll forms of electromagnetic radiation move at exactly this same speed in vacuum.
The speed of light in vacuum is usually denoted by a lowercase c, for ""constant"" or the Latin celeritas (meaning 'swiftness, celerity')This article uses c exclusively for the speed of light in vacuumThis article uses exclusively for the speed of light in vacuum.For instance, (the speed of light in vacuum, in metres per second) can be written as and then approximated as .
In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerSpecial relativity In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerIn 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used for a different constant that was later shown to equal times the speed of light in vacuumEinstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of l","This made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is defined to be exactly 299 792 458 m/s (approxUsing this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum ""c"" featured as a fundamental constant, also appearing in contexts unrelated to lightThis made the concept of the stationary aether (to which Lorentz and Poincaré still adhered) useless and revolutionized the concepts of space and time.
The speed of light in vacuum is usually denoted by a lowercase , for ""constant"" or the Latin (meaning 'swiftness, celerity')Using this and the principle of relativity as a basis he derived the special theory of relativity, in which the speed of light in vacuum c featured as a fundamental constant, also appearing in contexts unrelated to lightAll forms of electromagnetic radiation move at exactly this same speed in vacuum.
The speed of light in vacuum is usually denoted by a lowercase c, for ""constant"" or the Latin celeritas (meaning 'swiftness, celerity')This article uses c exclusively for the speed of light in vacuumThis article uses exclusively for the speed of light in vacuum.For instance, (the speed of light in vacuum, in metres per second) can be written as and then approximated as .
In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerSpecial relativity In 1905 Einstein postulated from the outset that the speed of light in vacuum, measured by a non-accelerating observer, is independent of the motion of the source or observerIn 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used for a different constant that was later shown to equal times the speed of light in vacuumEinstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of l[SEP]What is the significance of the speed of light in vacuum?","['C', 'D', 'B']",1.0
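Because c is an exact defined constant (299 792 458 m/s, with the metre now defined from it, as stated in this row), quantities such as the light-second and the light-time for one astronomical unit reduce to simple arithmetic. A short sketch (illustrative only; the IAU value of the astronomical unit is supplied here as an assumption and does not appear in the row):

# Light-travel-time arithmetic with the exact defined value of c (illustrative sketch).
C = 299_792_458            # m/s, exact by definition of the metre
AU = 149_597_870_700       # m, IAU 2012 definition (assumed input, not from this row)

light_second_km = C / 1000
print(f"one light-second: {light_second_km:.0f} km (~186,282 miles)")

light_time_au = AU / C
print(f"light-time for 1 au: {light_time_au:.3f} s (~8.3 minutes)")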
What is the term used to describe the proportionality factor to the Stefan-Boltzmann law that is utilized in subsequent evaluations of the radiative behavior of grey bodies?,"The Stefan–Boltzmann law, also known as Stefan's law, describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature. A so-called grey body is a body for which the spectral emissivity is independent of wavelength, so that the total emissivity, \varepsilon, is a constant. For an ideal absorber/emitter or black body, the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance) is directly proportional to the fourth power of the black body's temperature, T: : M^{\circ} = \sigma\, T^{4}. The Stefan–Boltzmann law may be expressed as a formula for radiance as a function of temperature. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function. The Stefan–Boltzmann law for the radiance of a black body is: : L^\circ_\Omega = \frac{M^{\circ}}\pi = \frac\sigma\pi\, T^{4}. However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity, which reflects emissions as totaled over all wavelengths, directions, and polarizations. In the general case, the Stefan–Boltzmann law for radiant exitance takes the form: : M = \varepsilon\,M^{\circ} = \varepsilon\,\sigma\, T^{4} where \varepsilon is the emissivity of the matter doing the emitting. The formula is given, where E is the radiant heat emitted from a unit of area per unit time, T is the absolute temperature, and is the Stefan–Boltzmann constant. ==Equations== ===Planck's law of black-body radiation=== Planck's law states that :B_ u(T) = \frac{2h u^3}{c^2}\frac{1}{e^{h u/kT} - 1}, where :B_{ u}(T) is the spectral radiance (the power per unit solid angle and per unit of area normal to the propagation) density of frequency u radiation per unit frequency at thermal equilibrium at temperature T. Units: power / [area × solid angle × frequency]. :h is the Planck constant; :c is the speed of light in vacuum; :k is the Boltzmann constant; : u is the frequency of the electromagnetic radiation; :T is the absolute temperature of the body. The emitted energy flux density or irradiance B_ u(T,E), is related to the photon flux density b_ u(T,E) through :B_ u(T,E) = Eb_ u(T,E) ===Wien's displacement law=== Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature. A consequence of Wien's displacement law is that the wavelength at which the intensity per unit wavelength of the radiation produced by a black body has a local maximum or peak, \lambda_\text{peak}, is a function only of the temperature: :\lambda_\text{peak} = \frac{b}{T}, where the constant b, known as Wien's displacement constant, is equal to \frac{hc}k\frac 1{5+W_0(-5e^{-5})} (where W_0 is the Lambert W function). The intensity of the light emitted from the blackbody surface is given by Planck's law, I( u,T) =\frac{2 h u^3}{c^2}\frac{1}{ e^{h u/(kT)}-1}, where *I( u,T) is the amount of power per unit surface area per unit solid angle per unit frequency emitted at a frequency u by a black body at temperature T. *h is the Planck constant *c is the speed of light, and *k is the Boltzmann constant. 
An emissivity of one corresponds to a black body. ==Detailed explanation== The radiant exitance (previously called radiant emittance), M, has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre (J s^{-1} m^{-2}), or equivalently, watts per square metre (W m^{-2}). The constant of proportionality, \sigma, is called the Stefan–Boltzmann constant. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. The Gebhart factors are used in radiative heat transfer; they are a means to describe the ratio of radiation absorbed by any other surface versus the total emitted radiation from a given surface. For simpler cases it can also be formulated as a single expression. ==See also== * Radiosity * Thermal radiation * Black body == References == Category:Heat transfer Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light and one may substitute the temperature (T) for the frequency in this equation. The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light, the Boltzmann constant and the Planck constant, is a direct consequence of Planck's law as formulated in 1900. == Stefan–Boltzmann constant == The Stefan–Boltzmann constant, σ, is derived from other known physical constants: :\sigma = \frac{2 \pi^5 k^4}{15 c^2 h^3} where k is the Boltzmann constant, h is Planck's constant, and c is the speed of light in a vacuum. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law. 
The proportionality factor in the definition of Ross' time constant is dependent upon the magnitude of the disturbance on the plant and the specifications for feedback control. When there are no disturbances, Ross' π-lemma shows that the open-loop optimal solution is the same as the closed-loop one. In the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
formula_1 assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann law. σ is the Stefan-Boltzmann constant.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth through mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and others. In 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. Emissivity, defined as a further proportionality factor to the Stefan-Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodies. For example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of space. By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths.
The proportionality factor in the definition of Ross' time constant is dependent upon the magnitude of the disturbance on the plant and the specifications for feedback control. When there are no disturbances, Ross' π-lemma shows that the open-loop optimal solution is the same as the closed-loop one. In the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth through mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and others. In 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. Emissivity, defined as a further proportionality factor to the Stefan-Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodies. For example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of space. By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths.
formula_1 assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann lawThe proportionality factor is called Henry's law constantIn the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth thru mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and othersBy 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths.By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengthsKirchoff's law of thermal radiation)σ is the Stefan-Boltzmann constant.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth thru mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and othersBy 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principlesFor example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of spaceIn 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.eIn physical chemistry, Henry's law is a gas law that states that the amount of dissolved gas in a liquid is directly proportional to its partial pressure above the liqui","Emissivity, defined as a further proportionality factor to the Stefan-Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodiesIn the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
formula_1 assuming the emission layer of the atmosphere radiates like a blackbody according to the Stefan-Boltzmann lawThe proportionality factor is called Henry's law constantIn the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth thru mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and othersBy 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths.By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengthsKirchoff's law of thermal radiation)σ is the Stefan-Boltzmann constant.
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth thru mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and othersBy 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principlesFor example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of spaceIn 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.eIn physical chemistry, Henry's law is a gas law that states that the amount of dissolved gas in a liquid is directly proportional to its partial pressure above the liqui[SEP]What is the term used to describe the proportionality factor to the Stefan-Boltzmann law that is utilized in subsequent evaluations of the radiative behavior of grey bodies?","['A', 'E', 'D']",1.0
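A minimal worked example of the grey-body form of the law discussed above, assuming an illustrative emissivity \varepsilon = 0.6 and temperature T = 300\,\mathrm{K} (both values are assumptions chosen only for the arithmetic):
\[
M = \varepsilon\,\sigma\,T^{4} = 0.6 \times \left(5.670\times10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}\right) \times (300\,\mathrm{K})^{4} \approx 276\ \mathrm{W\,m^{-2}},
\]
roughly 60% of the black-body exitance \sigma T^{4} \approx 459\ \mathrm{W\,m^{-2}} at the same temperature, as the definition of emissivity requires.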
What is the reason for the formation of stars exclusively within molecular clouds?","This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapse. There is observed evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressure. The theory of low-mass star formation, which is well-supported by observation, suggests that low-mass stars form by the gravitational collapse of rotating density enhancements within molecular clouds. In triggered star formation, one of several events might occur to compress a molecular cloud and initiate its gravitational collapse. A molecular cloud, sometimes called a stellar nursery (if star formation is occurring within), is a type of interstellar cloud, the density and size of which permit absorption nebulae, the formation of molecules (most commonly molecular hydrogen, H2), and the formation of H II regions. These clouds have a typical density of 30 particles per cubic centimetre. ==Processes== ===Star formation=== The formation of stars occurs exclusively within molecular clouds. This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapse. Stars have very high temperatures, primarily in their interior, and therefore there are few molecules formed in stars. Observations indicate that the coldest clouds tend to form low-mass stars, observed first in the infrared inside the clouds, then in visible light at their surface when the clouds dissipate, while giant molecular clouds, which are generally warmer, produce stars of all masses. These can form in association with collapsing molecular clouds or possibly independently. As it collapses, a molecular cloud breaks into smaller and smaller pieces in a hierarchical manner, until the fragments reach stellar mass. In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. Star formation is the process by which dense regions within molecular clouds in interstellar space, sometimes referred to as ""stellar nurseries"" or ""star-forming regions"", collapse and form stars. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies. ===High-latitude diffuse molecular clouds=== In 1984 IRAS identified a new type of diffuse molecular cloud. Within molecular clouds are regions with higher density, where much dust and many gas cores reside, called clumps. These clumps are the beginning of star formation if gravitational forces are sufficient to cause the dust and gas to collapse. ==History== The form of molecular clouds by interstellar dust and hydrogen gas traces its links to the formation of the Solar System, approximately 4.6 billion years ago. ==Occurrence== Within the Milky Way, molecular gas clouds account for less than one percent of the volume of the interstellar medium (ISM), yet it is also the densest part of the medium, comprising roughly half of the total gas mass interior to the Sun's galactic orbit. At the same time, the clouds are known to be disrupted by some process—most likely the effects of massive stars—before a significant fraction of their mass has become stars. 
It has been speculated that as long as the air remains saturated, the natural force of cohesion that hold the molecules of a substance together may act to keep the cloud from breaking up. Higher density regions of the interstellar medium form clouds, or diffuse nebulae, where star formation takes place. Turbulence is instrumental in causing fragmentation of the cloud, and on the smallest scales it promotes collapse. ==Protostar== A protostellar cloud will continue to collapse as long as the gravitational binding energy can be eliminated. The evidence comes from the fact that the ""turbulent"" velocities inferred from CO linewidth scale in the same manner as the orbital velocity (a virial relation). ===Physics=== The physics of molecular clouds is poorly understood and much debated. ",The formation of stars occurs exclusively outside of molecular clouds.,"The low temperatures and high densities of molecular clouds cause the gravitational force to exceed the internal pressures that are acting ""outward"" to prevent a collapse.","The low temperatures and low densities of molecular clouds cause the gravitational force to be less than the internal pressures that are acting ""outward"" to prevent a collapse.","The high temperatures and low densities of molecular clouds cause the gravitational force to exceed the internal pressures that are acting ""outward"" to prevent a collapse.","The high temperatures and high densities of molecular clouds cause the gravitational force to be less than the internal pressures that are acting ""outward"" to prevent a collapse.",B,kaggle200,"Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
In molecular clouds, formation of formyl cyanide is speculated to result from formaldehyde and the cyanide radical:
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so these nebulae are called molecular clouds. The Herschel Space Observatory has revealed that filaments are truly ubiquitous in molecular clouds. Dense molecular filaments, which are central to the star formation process, will fragment into gravitationally bound cores, most of which will evolve into stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed fragmentation manner of the filaments. In supercritical filaments observations have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and includes embedded protostars with outflows. Observations indicate that the coldest clouds tend to form low-mass stars, observed first in the infrared inside the clouds, then in visible light at their surface when the clouds dissipate, while giant molecular clouds, which are generally warmer, produce stars of all masses. These giant molecular clouds have typical densities of 100 particles per cm3, diameters of , masses of up to 6 million solar masses (), and an average interior temperature of 10 K. About half the total mass of the galactic ISM is found in molecular clouds and in the Milky Way there are an estimated 6,000 molecular clouds, each with more than . The nearest nebula to the Sun where massive stars are being formed is the Orion Nebula, away. However, lower mass star formation is occurring about 400–450 light years distant in the ρ Ophiuchi cloud complex.
The formation of stars occurs exclusively within molecular clouds. This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapse. There is observed evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressure. The evidence comes from the fact that the ""turbulent"" velocities inferred from CO linewidth scale in the same manner as the orbital velocity (a virial relation).","Small molecular clouds Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globules. The densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
High-latitude diffuse molecular clouds In 1984 IRAS identified a new type of diffuse molecular cloud. These were diffuse filamentary clouds that are visible at high galactic latitudes. These clouds have a typical density of 30 particles per cubic centimetre.
One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within.
Star formation The formation of stars occurs exclusively within molecular clouds. This is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapse. There is observed evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressure. The evidence comes from the fact that the ""turbulent"" velocities inferred from CO linewidth scale in the same manner as the orbital velocity (a virial relation).","These clouds have a typical density of 30 particles per cubic centimetre.
One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within.
Star formation The formation of stars occurs exclusively within molecular cloudsObservations indicate that the coldest clouds tend to form low-mass stars, observed first in the infrared inside the clouds, then in visible light at their surface when the clouds dissipate, while giant molecular clouds, which are generally warmer, produce stars of all massesThere is observed evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressureThe densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
In molecular clouds, formation of formyl cyanide is speculated to result from formaldehyde and the cyanide radical:
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H) form, so these nebulae are called molecular cloudsHowever, lower mass star formation is occurring about 400–450 light years distant in the ρ Ophiuchi cloud complex.
The formation of stars occurs exclusively within molecular cloudsThis is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapseThese giant molecular clouds have typical densities of 100 particles per cm, diameters of , masses of up to 6 million solar masses (), and an average interior temperature of 10 KThe Herschel Space Observatory has revealed that filaments are truly ubiquitous in molecular cloudsSmall molecular clouds Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globulesThe densest parts of small molecular clouds are equivalent to the molecular cores found in ","These clouds have a typical density of 30 particles per cubic centimetre.
One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within.
Star formation The formation of stars occurs exclusively within molecular cloudsObservations indicate that the coldest clouds tend to form low-mass stars, observed first in the infrared inside the clouds, then in visible light at their surface when the clouds dissipate, while giant molecular clouds, which are generally warmer, produce stars of all massesThere is observed evidence that the large, star-forming clouds are confined to a large degree by their own gravity (like stars, planets, and galaxies) rather than by external pressureThe densest parts of small molecular clouds are equivalent to the molecular cores found in GMCs and are often included in the same studies.
In molecular clouds, formation of formyl cyanide is speculated to result from formaldehyde and the cyanide radical:
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H) form, so these nebulae are called molecular cloudsHowever, lower mass star formation is occurring about 400–450 light years distant in the ρ Ophiuchi cloud complex.
The formation of stars occurs exclusively within molecular cloudsThis is a natural consequence of their low temperatures and high densities, because the gravitational force acting to collapse the cloud must exceed the internal pressures that are acting ""outward"" to prevent a collapseThese giant molecular clouds have typical densities of 100 particles per cm, diameters of , masses of up to 6 million solar masses (), and an average interior temperature of 10 KThe Herschel Space Observatory has revealed that filaments are truly ubiquitous in molecular cloudsSmall molecular clouds Isolated gravitationally-bound small molecular clouds with masses less than a few hundred times that of the Sun are called Bok globulesThe densest parts of small molecular clouds are equivalent to the molecular cores found in [SEP]What is the reason for the formation of stars exclusively within molecular clouds?","['B', 'E', 'C']",1.0
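A hedged sketch of the collapse criterion that underlies the answer above, in the standard Jeans form, with T the cloud temperature, \rho its mass density and \mu m_{\mathrm{H}} the mean particle mass:
\[
M_{J} \simeq \left(\frac{5 k_{B} T}{G \mu m_{\mathrm{H}}}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2} \;\propto\; \frac{T^{3/2}}{\rho^{1/2}},
\]
so the low temperatures and high densities peculiar to molecular clouds push the Jeans mass down until ordinary cloud fragments exceed it and gravity overwhelms internal pressure.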
What is the identity operation in symmetry groups?,"The need for such an identity operation arises from the mathematical requirements of group theory. === Reflection through mirror planes === thumb|Reflection operation The reflection operation is carried out with respect to symmetry elements known as planes of symmetry or mirror planes. In group theory, geometry, representation theory and molecular geometry, a symmetry operation is a geometric transformation of an object that leaves the object looking the same after it has been carried out. In the context of molecular symmetry, a symmetry operation is a permutation of atoms such that the molecule or crystal is transformed into a state indistinguishable from the starting state. It is equivalent to the Identity () operation. is a rotation of 180°, as is a rotation of 120°, as and so on. For example, as transformations of an object in space, rotations, reflections and inversions are all symmetry operations. Such symmetry operations are performed with respect to symmetry elements (for example, a point, line or plane). Even the most asymmetric molecule possesses the identity operation. In group theory, the symmetry group of a geometric object is the group of all transformations under which the object is invariant, endowed with the group operation of composition. The four symmetry operations , , and form the point group . This figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes. In the context of molecular symmetry, quantum wavefunctions need not be invariant, because the operation can multiply them by a phase or mix states within a degenerate representation, without affecting any physical property. == Molecules == === Identity Operation === The identity operation corresponds to doing nothing to the object. In addition, many abstract features of the group (defined purely in terms of the group operation) can be interpreted in terms of symmetries. The group of isometries of space induces a group action on objects in it, and the symmetry group Sym(X) consists of those isometries which map X to itself (as well as mapping any further pattern to itself). The identity operation is denoted by or . In the identity operation, no change can be observed for the molecule. The above is sometimes called the full symmetry group of X to emphasize that it includes orientation-reversing isometries (reflections, glide reflections and improper rotations), as long as those isometries map this particular X to itself. In invariant theory, the symmetric group acts on the variables of a multi-variate function, and the functions left invariant are the so-called symmetric functions. In abstract algebra, the symmetric group defined over any set is the group whose elements are all the bijections from the set to itself, and whose group operation is the composition of functions. Identity group may refer to: *Identity (social science) *Social group *Trivial group, a mathematical group consisting of a single element. # Symmetry operations can be collected together in groups which are isomorphic to permutation groups. 
",The identity operation leaves the molecule unchanged and forms the identity element in the symmetry group.,The identity operation rotates the molecule about its center of mass.,The identity operation inverts the molecule about its center of inversion.,The identity operation reflects the molecule across a plane of symmetry.,The identity operation translates the molecule in 3-D space.,A,kaggle200,"In chemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (C), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (S). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (C) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/""n"", where ""n"" is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case, , since applying it twice produces the identity operation. In molecules with more than one rotation axis, the C axis having the largest value of n is the highest order rotation axis or principal axis. For example in boron trifluoride (BF), the highest order of rotation axis is C, so the principal axis of rotation is C.
This expression of the identity operation is called a ""representation"" or a ""resolution"" of the identity. This formal representation satisfies the basic property of the identity:
If for some , the left operation is the identity operation, then is called a left identity. Similarly, if , then is a right identity.
The identity operation corresponds to doing nothing to the object. Because every molecule is indistinguishable from itself if nothing is done to it, every object possesses at least the identity operation. The identity operation is denoted by ""E"" or ""I"". In the identity operation, no change can be observed for the molecule. Even the most asymmetric molecule possesses the identity operation. The need for such an identity operation arises from the mathematical requirements of group theory.","D1 is the 2-element group containing the identity operation and a single reflection, which occurs when the figure has only a single axis of bilateral symmetry, for example the letter ""A"".
D2, which is isomorphic to the Klein four-group, is the symmetry group of a non-equilateral rectangle. This figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes.
D3, D4 etc. are the symmetry groups of the regular polygons.
Within each of these symmetry types, there are two degrees of freedom for the center of rotation, and in the case of the dihedral groups, one more for the positions of the mirrors.
Basic point group symmetry operations The five basic symmetry operations mentioned above are: Identity Operation E (from the German 'Einheit' meaning unity): The identity operation leaves the molecule unchanged. It forms the identity element in the symmetry group. Though its inclusion seems to be trivial, it is important also because even for the most asymmetric molecule, this symmetry is present. The corresponding symmetry element is the entire molecule itself.
Identity Operation The identity operation corresponds to doing nothing to the object. Because every molecule is indistinguishable from itself if nothing is done to it, every object possesses at least the identity operation. The identity operation is denoted by E or I. In the identity operation, no change can be observed for the molecule. Even the most asymmetric molecule possesses the identity operation. The need for such an identity operation arises from the mathematical requirements of group theory.","It forms the identity element in the symmetry groupThe corresponding symmetry element is the entire molecule itself.
Identity Operation The identity operation corresponds to doing nothing to the objectThe need for such an identity operation arises from the mathematical requirements of group theoryThe need for such an identity operation arises from the mathematical requirements of group theory.This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operationare the symmetry groups of the regular polygons.
Within each of these symmetry types, there are two degrees of freedom for the center of rotation, and in the case of the dihedral groups, one more for the positions of the mirrors.
Basic point group symmetry operations The five basic symmetry operations mentioned above are: Identity Operation E (from the German 'Einheit' meaning unity): The identity operation leaves the molecule unchangedAn identity operation is a characteristic of every molecule even if it has no symmetryThe identity operation is denoted by E or IThe identity operation is denoted by ""E"" or ""I""They are identity operation (E), rotation operation or proper rotation (C), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (S)The identity operation (E) consists of leaving the molecule as it isThis formal representation satisfies the basic property of the identity:
If for some , the left operation is the identity operation, then is called a left identity- In chemistry, there are five important symmetry operationsThis figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes.
D3, D4 etcFor example in boron trifluoride (BF), the highest order of rotation axis is C, so the principal axis of rotation is C.
This expression of the identity operation is called a ""representation"" or a ""resolution"" of the identityIn this case, , since applying it twice produces the identity operationEven the most asymmetric molecule possesses the identity ope","It forms the identity element in the symmetry groupThe corresponding symmetry element is the entire molecule itself.
Identity Operation The identity operation corresponds to doing nothing to the objectThe need for such an identity operation arises from the mathematical requirements of group theoryThe need for such an identity operation arises from the mathematical requirements of group theory.This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operationare the symmetry groups of the regular polygons.
Within each of these symmetry types, there are two degrees of freedom for the center of rotation, and in the case of the dihedral groups, one more for the positions of the mirrors.
Basic point group symmetry operations The five basic symmetry operations mentioned above are: Identity Operation E (from the German 'Einheit' meaning unity): The identity operation leaves the molecule unchangedAn identity operation is a characteristic of every molecule even if it has no symmetryThe identity operation is denoted by E or IThe identity operation is denoted by ""E"" or ""I""They are identity operation (E), rotation operation or proper rotation (C), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (S)The identity operation (E) consists of leaving the molecule as it isThis formal representation satisfies the basic property of the identity:
If for some , the left operation is the identity operation, then is called a left identity- In chemistry, there are five important symmetry operationsThis figure has four symmetry operations: the identity operation, one twofold axis of rotation, and two nonequivalent mirror planes.
D3, D4 etcFor example in boron trifluoride (BF), the highest order of rotation axis is C, so the principal axis of rotation is C.
This expression of the identity operation is called a ""representation"" or a ""resolution"" of the identityIn this case, , since applying it twice produces the identity operationEven the most asymmetric molecule possesses the identity ope[SEP]What is the identity operation in symmetry groups?","['A', 'D', 'E']",1.0
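Stated compactly, the property that makes the identity operation the identity element of a molecular point group G is
\[
E\,g \;=\; g\,E \;=\; g \qquad \text{for every operation } g \in G,
\]
so, for instance, in C_{2v} one has E\,C_{2} = C_{2} and E\,\sigma_{v} = \sigma_{v}, and even a molecule with no other symmetry still forms the trivial group \{E\}.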
What is a regular polytope?,"In mathematics, a regular 4-polytope is a regular four-dimensional polytope. In geometry, H. S. M. Coxeter called a regular polytope a special kind of configuration. These are fitted together along their respective faces (face-to-face) in a regular fashion. === Properties === Like their 3-dimensional analogues, the convex regular 4-polytopes can be naturally ordered by size as a measure of 4-dimensional content (hypervolume) for the same radius. Each convex regular 4-polytope is bounded by a set of 3-dimensional cells which are all Platonic solids of the same type and size. The following table lists some properties of the six convex regular 4-polytopes. This polyhedron can be used as the core for a set of stellations. == Regular compounds == A regular polyhedral compound can be defined as a compound which, like a regular polyhedron, is vertex-transitive, edge-transitive, and face-transitive. It generalizes the set of semiregular polyhedra and Johnson solids to higher dimensions. == Uniform cases== The set of convex uniform 4-polytopes (also called semiregular 4-polytopes) are completely known cases, nearly all grouped by their Wythoff constructions, sharing symmetries of the convex regular 4-polytopes and prismatic forms. In geometry, a Blind polytope is a convex polytope composed of regular polytope facets. Regular polytopes will have one row and column per k-face element, while other polytopes will have one row and column for each k-face type by their symmetry classes. Hence, regular polyhedral compounds can also be regarded as dual-regular compounds. *Abstract regular 4-polytopes: ** 11-cell {3,5,3} ** 57-cell {5,3,5} *Uniform 4-polytope uniform 4-polytope families constructed from these 6 regular forms. Removing the coincident faces results in the compound of twenty octahedra. == 4-polytope compounds == Orthogonal projections 200px 200px 75 {4,3,3} 75 {3,3,4} In 4-dimensions, there are a large number of regular compounds of regular polytopes. For example, there are 2 vertices in each edge (each edge has 2 vertices), and 2 cells meet at each face (each face belongs to 2 cells), in any regular 4-polytope. Gosset's figures 3D honeycombs 3D honeycombs 3D honeycombs 150px Simple tetroctahedric check 150px Complex tetroctahedric check 4D polytopes 4D polytopes 4D polytopes 150px Tetroctahedric 150px Octicosahedric 150px Tetricosahedric In geometry, by Thorold Gosset's definition a semiregular polytope is usually taken to be a polytope that is vertex-transitive and has all its facets being regular polytopes. Unlike the case of polyhedra, this is not equivalent to the symmetry group acting transitively on its flags; the compound of two tetrahedra is the only regular compound with that property. They are the four-dimensional analogues of the regular polyhedra in three dimensions and the regular polygons in two dimensions. However, since not all uniform polyhedra are regular, the number of semiregular polytopes in dimensions higher than three is much smaller than the number of uniform polytopes in the same number of dimensions. Every polytope, and abstract polytope has a Hasse diagram expressing these connectivities, which can be systematically described with an incidence matrix. == Configuration matrix for regular polytopes== A configuration for a regular polytope is represented by a matrix where the diagonal element, Ni, is the number of i-faces in the polytope. E.L. 
Elte compiled a longer list in 1912 as The Semiregular Polytopes of the Hyperspaces which included a wider definition. == Gosset's list == In three-dimensional space and below, the terms semiregular polytope and uniform polytope have identical meanings, because all uniform polygons must be regular. There are six convex and ten star regular 4-polytopes, giving a total of sixteen. == History == The convex regular 4-polytopes were first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century. ",A regular polytope is a geometric shape whose symmetry group is transitive on its diagonals.,A regular polytope is a geometric shape whose symmetry group is transitive on its vertices.,A regular polytope is a geometric shape whose symmetry group is transitive on its flags.,A regular polytope is a geometric shape whose symmetry group is transitive on its edges.,A regular polytope is a geometric shape whose symmetry group is transitive on its faces.,C,kaggle200,"For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope.
A regular polytope can be represented by a Schläfli symbol of the form with regular facets as and regular vertex figures as
Regular polytopes have the highest degree of symmetry of all polytopes. The symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry. All its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension .","An n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope.So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cube. Thus we can define a regular polytope very succinctly: A regular polytope is one whose symmetry group is transitive on its flags.In the 20th century, some important developments were made. The symmetry groups of the classical regular polytopes were generalised into what are now called Coxeter groups. Coxeter groups also include the symmetry groups of regular tessellations of space or of the plane. For example, the symmetry group of an infinite chessboard would be the Coxeter group [4,4].
Regular polytopes Regular polytopes have the highest degree of symmetry of all polytopes. The symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
There are three main classes of regular polytope which occur in any number of dimensions: Simplices, including the equilateral triangle and the regular tetrahedron.
Hypercubes or measure polytopes, including the square and the cube.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry. All its elements or j-faces (for all 0 ≤ j ≤ n, where n is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension ≤ n.","The symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
There are three main classes of regular polytope which occur in any number of dimensions: Simplices, including the equilateral triangle and the regular tetrahedron.
Hypercubes or measure polytopes, including the square and the cube.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetryAn n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope.So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cubeThe symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry- For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope.
A regular polytope can be represented by a Schläfli symbol of the form with regular facets as and regular vertex figures as
Regular polytopes have the highest degree of symmetry of all polytopesThus we can define a regular polytope very succinctly: A regular polytope is one whose symmetry group is transitive on its flags.In the 20th century, some important developments were madeAll its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension .All its elements or j-face","The symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
There are three main classes of regular polytope which occur in any number of dimensions: Simplices, including the equilateral triangle and the regular tetrahedron.
Hypercubes or measure polytopes, including the square and the cube.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetryAn n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope.So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cubeThe symmetry group of a regular polytope acts transitively on its flags; hence, the dual polytope of a regular polytope is also regular.
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry- For a regular abstract polytope, if the combinatorial automorphisms of the abstract polytope are realized by geometric symmetries then the geometric figure will be a regular polytope.
A regular polytope can be represented by a Schläfli symbol of the form with regular facets as and regular vertex figures as
Regular polytopes have the highest degree of symmetry of all polytopesThus we can define a regular polytope very succinctly: A regular polytope is one whose symmetry group is transitive on its flags.In the 20th century, some important developments were madeAll its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension .All its elements or j-face[SEP]What is a regular polytope?","['C', 'A', 'D']",1.0
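A small worked count that illustrates the flag-transitivity definition used above, taking the cube as the example: a flag is a chain (vertex, edge, face) with each element incident to the next, so
\[
\#\{\text{flags of the cube}\} = 8 \times 3 \times 2 = 48 = \lvert \mathrm{O}_{h} \rvert,
\]
and because the full symmetry group of the cube has exactly 48 elements, it acts simply transitively on the flags, which is what regularity requires.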
What is the reason behind the largest externally observed electrical effects when two conductors are separated by the smallest distance without touching?,"In electromagnetics, proximity effect is a redistribution of electric current occurring in nearby parallel electrical conductors carrying alternating current flowing in the same direction which causes the current distribution in the conductor to concentrate on the side away from the nearby conductor. The proximity effect can significantly increase the AC resistance of adjacent conductors when compared to its resistance to a DC current. The result is that the current is concentrated in the areas of the conductor farthest away from nearby conductors carrying current in the same direction. Contact electrification is a phrase that describes the phenomenon whereby two surfaces become electrically charged when they contact and then separate. The concentration of current on the side of the conductor gets larger with increasing frequency. The Johnsen–Rahbek effect occurs when an electric potential is applied across the boundary between a metallic surface and the surface of a semiconducting material. While many aspects of contact electrification are now understood, and consequences have been extensively documented, there remain disagreements in the current literature about the underlying mechanisms. It is caused by eddy currents induced by the time-varying magnetic field of the other conductor. Similarly, in two adjacent conductors carrying alternating currents flowing in opposite directions, such as are found in power cables and pairs of bus bars, the current in each conductor is concentrated into a strip on the side facing the other conductor. == Effects == The additional resistance increases power losses which, in power circuits, can generate undesirable heating. Similarly, in adjacent conductors carrying AC flowing in opposite directions, the current will be redistributed to the side of the conductor closest to the other conductor. == Explanation == A changing magnetic field will influence the distribution of an electric current flowing within an electrical conductor, by electromagnetic induction. This ""current crowding"" effect causes the current to occupy a smaller effective cross-sectional area of the conductor, increasing current density and AC electrical resistance of the conductor. The Ferranti effect is more pronounced the longer the line and the higher the voltage applied.Line-Charging Current Interruption by HV and EHV Circuit Breakers, Carl-Ejnar Sölver, Ph. D. and Sérgio de A. Morais, M. Sc. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. As mentioned above contact electrification is when two bodies contact then separate; triboelectricity includes sliding. The relative voltage rise is proportional to the square of the line length and the square of frequency.A Knowledge Base for Switching Surge Transients, A. I. Ibrahim and H. W. Dommel The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance. 
thumb|right|Illustration of the Ferranti effect; addition of voltages across the line inductance In electrical engineering, the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. Under these conditions an attractive force appears, whose magnitude depends on the voltage and the specific materials involved. At higher frequencies, the AC resistance of a conductor can easily exceed ten times its DC resistance. == Example == For example, if two wires carrying the same alternating current lie parallel to one another, as would be found in a coil used in an inductor or transformer, the magnetic field of one wire will induce longitudinal eddy currents in the adjacent wire, that flow in long loops along the wire, in the same direction as the main current on the side of the wire facing away from the other wire, and back in the opposite direction on the side of the wire facing the other wire. It was first observed during the installation of underground cables in Sebastian Ziani de Ferranti's 10,000-volt AC power distribution system in 1887.J. F. Wilson, Ferranti and the British Electrical Industry, 1864-1930, Manchester University Press, 1988 page 44 The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. The winding is usually limited to a single layer, and often the turns are spaced apart to separate the conductors. ","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the temperature between the surfaces.","The surface charge on a conductor depends on the magnitude of the magnetic field, which in turn depends on the distance between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the angle between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfaces.","The surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the pressure between the surfaces.",D,kaggle200,"where for simplicity, we assume an orthogonal lattice in which α only depends on ""m"", β only depends on ""n"" and γ only depends on ""p"". With this assumption,
The operational definition of synonymy depends on the distinctions between these classes of sememes. For example, the differentiation between what some academics call cognitive synonyms and near-synonyms depends on these differences.
The ampacity of a conductor depends on its ability to dissipate heat without damage to the conductor or its insulation. This is a function of the
The masculine accusative singular before the adjective is like either the nominative or the genitive, as in masculine nouns. Which form is used depends on which form the accompanying noun uses, which in turn depends on whether the noun is animate or inanimate.","Electrostatic pressure On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field just outside the surface amounts to: P=ε02E2, This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
Surface charge density is defined as the amount of electric charge, q, that is present on a surface of given area, A: Conductors According to Gauss’s law, a conductor at equilibrium carrying an applied current has no charge on its interior. Instead, the entirety of the charge of the conductor resides on the surface, and can be expressed by the equation: where E is the electric field caused by the charge on the conductor and ε0 is the permittivity of the free space. This equation is only strictly accurate for conductors with infinitely large area, but it provides a good approximation if E is measured at an infinitesimally small Euclidean distance from the surface of the conductor.
Contact electrification If two conducting surfaces are moved relative to each other, and there is potential difference in the space between them, then an electric current will be driven. This is because the surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfaces. The externally observed electrical effects are largest when the conductors are separated by the smallest distance without touching (once brought into contact, the charge will instead flow internally through the junction between the conductors). Since two conductors in equilibrium can have a built-in potential difference due to work function differences, this means that bringing dissimilar conductors into contact, or pulling them apart, will drive electric currents. These contact currents can damage sensitive microelectronic circuitry and occur even when the conductors would be grounded in the absence of motion.","The externally observed electrical effects are largest when the conductors are separated by the smallest distance without touching (once brought into contact, the charge will instead flow internally through the junction between the conductors)This is because the surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfacesSince two conductors in equilibrium can have a built-in potential difference due to work function differences, this means that bringing dissimilar conductors into contact, or pulling them apart, will drive electric currentsThis equation is only strictly accurate for conductors with infinitely large area, but it provides a good approximation if E is measured at an infinitesimally small Euclidean distance from the surface of the conductor.
Contact electrification If two conducting surfaces are moved relative to each other, and there is potential difference in the space between them, then an electric current will be drivenThese contact currents can damage sensitive microelectronic circuitry and occur even when the conductors would be grounded in the absence of motionInstead, the entirety of the charge of the conductor resides on the surface, and can be expressed by the equation: where E is the electric field caused by the charge on the conductor and ε0 is the permittivity of the free spaceThis average in terms of the field just outside the surface amounts to: P=ε02E2, This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
Surface charge density is defined as the amount of electric charge, q, that is present on a surface of given area, A: Conductors According to Gauss’s law, a conductor at equilibrium carrying an applied current has no charge on its interiorElectrostatic pressure On a conductor, a surface charge will experience a force in the presence of an electric fieldFor example, the differentiation between what some academics call cognitive synonyms and near-synonyms depends on these differences.
The ampacity of a conducto","The externally observed electrical effects are largest when the conductors are separated by the smallest distance without touching (once brought into contact, the charge will instead flow internally through the junction between the conductors)This is because the surface charge on a conductor depends on the magnitude of the electric field, which in turn depends on the distance between the surfacesSince two conductors in equilibrium can have a built-in potential difference due to work function differences, this means that bringing dissimilar conductors into contact, or pulling them apart, will drive electric currentsThis equation is only strictly accurate for conductors with infinitely large area, but it provides a good approximation if E is measured at an infinitesimally small Euclidean distance from the surface of the conductor.
Contact electrification If two conducting surfaces are moved relative to each other, and there is potential difference in the space between them, then an electric current will be drivenThese contact currents can damage sensitive microelectronic circuitry and occur even when the conductors would be grounded in the absence of motionInstead, the entirety of the charge of the conductor resides on the surface, and can be expressed by the equation: where E is the electric field caused by the charge on the conductor and ε0 is the permittivity of the free spaceThis average in terms of the field just outside the surface amounts to: P=ε02E2, This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
Surface charge density is defined as the amount of electric charge, q, that is present on a surface of given area, A: Conductors According to Gauss’s law, a conductor at equilibrium carrying an applied current has no charge on its interiorElectrostatic pressure On a conductor, a surface charge will experience a force in the presence of an electric fieldFor example, the differentiation between what some academics call cognitive synonyms and near-synonyms depends on these differences.
The ampacity of a conducto[SEP]What is the reason behind the largest externally observed electrical effects when two conductors are separated by the smallest distance without touching?","['D', 'E', 'C']",1.0
What is the formalism that angular momentum is associated with in rotational invariance?,"In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. The symmetry associated with conservation of angular momentum is rotational invariance. The close relationship between angular momentum and rotations is reflected in Noether's theorem that proves that angular momentum is conserved whenever the laws of physics are rotationally invariant. == Angular momentum in electrodynamics == When describing the motion of a charged particle in an electromagnetic field, the canonical momentum P (derived from the Lagrangian for this system) is not gauge invariant. According to Noether's theorem, if the action (the integral over time of its Lagrangian) of a physical system is invariant under rotation, then angular momentum is conserved. === Application to quantum mechanics === In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schrödinger's equation. In physics, angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. Angular momentum can be considered a rotational analog of linear momentum. In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as:\mathbf{L} = -i\hbar(\mathbf{r} \times abla) where is the vector differential operator, del. ===Spin angular momentum=== There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator \mathbf{S} = \left(S_x, S_y, S_z\right). The gauge-invariant angular momentum, that is kinetic angular momentum, is given by \mathbf{K}= \mathbf{r} \times ( \mathbf{P} - e\mathbf{A} ) The interplay with quantum mechanics is discussed further in the article on canonical commutation relations. == Angular momentum in optics == In classical Maxwell electrodynamics the Poynting vector is a linear momentum density of electromagnetic field. \mathbf{S}(\mathbf{r}, t) = \epsilon_0 c^2 \mathbf{E}(\mathbf{r}, t) \times \mathbf{B}(\mathbf{r}, t). Angular momentum is a property of a physical system that is a constant of motion (also referred to as a conserved property, time-independent and well-defined) in two situations: #The system experiences a spherically symmetric potential field. In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion.Introductory Quantum Mechanics, Richard L. Liboff, 2nd Edition, There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). The direction of angular momentum is related to the angular velocity of the rotation. The total angular momentum corresponds to the Casimir invariant of the Lie algebra so(3) of the three-dimensional rotation group. ==See also== * * Principal quantum number * Orbital angular momentum quantum number * Magnetic quantum number * Spin quantum number * Angular momentum coupling * Clebsch–Gordan coefficients * Angular momentum diagrams (quantum mechanics) * Rotational spectroscopy ==References== * *Albert Messiah, (1966). The total angular momentum is the sum of the spin and orbital angular momenta. 
Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. In quantum mechanics, the total angular momentum quantum number parametrises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin). Similarly, for a point mass m the moment of inertia is defined as, I=r^2mwhere r is the radius of the point mass from the center of rotation, and for any collection of particles m_i as the sum, \sum_i I_i = \sum_i r_i^2m_i Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m2/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits. In simpler terms, the total angular momentum operator characterizes how a quantum system is changed when it is rotated. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. The rotational equivalent for point particles may be derived as follows: \mathbf{L} = I\boldsymbol{\omega} which means that the torque (i.e. the time derivative of the angular momentum) is \boldsymbol{\tau} = \frac{dI}{dt}\boldsymbol{\omega} + I\frac{d\boldsymbol{\omega}}{dt}. ",Angular momentum is the 1-form Noether charge associated with rotational invariance.,Angular momentum is the 3-form Noether charge associated with rotational invariance.,Angular momentum is the 5-form Noether charge associated with rotational invariance.,Angular momentum is the 2-form Noether charge associated with rotational invariance.,Angular momentum is the 4-form Noether charge associated with rotational invariance.,D,kaggle200,"for any rotation ""R"". Since the rotation does not depend explicitly on time, it commutes with the energy operator. Thus for rotational invariance we must have [""R"", ""H""] = 0.
In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schrödinger's equation. That is
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.","Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant.In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor: Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them.","The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovectorThe symmetry associated with conservation of angular momentum is rotational invarianceIn this formalism, angular momentum is the 2-form Noether charge associated with rotational invarianceThe fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.Thus for rotational invariance we must have [""R"", ""H""] = 0.
In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schrödinger's equationThis equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between themAs a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant.In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent)In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωijSince the rotation does not depend explicitly on time, it commutes with the energy operatorThat is
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of ","The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.
In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovectorThe symmetry associated with conservation of angular momentum is rotational invarianceIn this formalism, angular momentum is the 2-form Noether charge associated with rotational invarianceThe fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved.
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.Thus for rotational invariance we must have [""R"", ""H""] = 0.
In quantum mechanics, rotational invariance is the property that after a rotation the new system still obeys Schrödinger's equationThis equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between themAs a result, angular momentum is not conserved for general curved spacetimes, unless it happens to be asymptotically rotationally invariant.In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent)In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωijSince the rotation does not depend explicitly on time, it commutes with the energy operatorThat is
Noether's theorem states that every conservation law is associated with a symmetry (invariant) of [SEP]What is the formalism that angular momentum is associated with in rotational invariance?","['D', 'E', 'B']",1.0
"Which hand should be used to apply the right-hand rule when tightening or loosening nuts, screws, bolts, bottle caps, and jar lids?","Ambidexterity is the ability to use both the right and left hand equally well. For instance, most weapons in ancient China were wielded primarily with the right hand and on the right side; this habit has carried on to the practice of those weapons in modern times. Having more precise coordination with the left hand is believed to allow better-controlled, and stronger drives. thumb|right|Vibratory hammer bolting with a hydraulic torque wrench. thumb|right|Flange bolting with hydraulic torque wrench. thumb|Four narrow- clearance hydraulic torque wrenches on a flange. right|thumb|A jar opener for screw-off lids thumb|Prestige Jar Opener for screw-off lids using rubber timing belt thumb|Jar opener for preserving jar with lift-off lid - patented by Havolit, manufactured in 1950s thumb|Automatic jar opener one-touch / Robotwist A jar opener is a kitchen device which is used to open glass jars. A natural right- hander, by 1986 he could throw well enough with his left hand that he felt capable of pitching with either hand in a game. Since many everyday devices (such as can openers and scissors) are asymmetrical and designed for right-handed people, many left- handers learn to use them right-handedly due to the rarity or lack of left- handed models. Alastair Cook, Jimmy Anderson, Stuart Broad, Ben Dunk, Ben Stokes, Adam Gilchrist, Eoin Morgan and Kagiso Rabada are natural right- handers, but bat left-handed. thumb|A typical helping hand A helping hand, also known as a third hand, soldering hand, or X-tra Hands, is a type of extremely adjustable jig used in soldering and craftwork to hold materials near each other so that the user can work on them. ==Description== A commonly produced version consists of a weighted base, a pair of twice-adjustable arms ending in crocodile clips, and optionally a magnifying glass, held together by flexible joints. When referring to humans, it indicates that a person has no marked preference for the use of the right or left hand. The dominant hand is typically placed on the top of the stick to allow for better stickhandling and control of the puck. thumb|right|Oyster glove An oyster glove is a special glove worn to protect the hand holding an oyster when opening it with an oyster knife. For the most part, right-handed players shoot left and, likewise, most left-handed players shoot right as the player will often wield the stick one-handed. Sachin Tendulkar uses his left hand for writing, but bats and bowls with his right hand, it is the same with Kane Williamson. Although not ambidextrous, Phil Mickelson and Mike Weir are both right-handers who golf left-handed; Ben Hogan was the opposite, being a natural left-hander who played golf right- handed, as is Cristie Kerr. Such have the case of Rafael Nadal who uses his right hand for writing, but plays tennis with left. In floorball, like ice hockey, right-handed players shoot left and, likewise, most left- handed players shoot right as the player will often wield the stick one- handed. In an 1992 New York Times Q&A; article on ambidexterity, the term was used to describe people ""...with both hands as skilled as a right-hander's left hand."" He played guitar exclusively left-handed. 
==Tools== With respect to tools, ambidextrous may be used to mean that the tool may be used equally well with either hand; an ""ambidextrous knife"" refers to the opening mechanism and locking mechanism on a folding knife. There are many players who are naturally right handed but play lefty and vice versa. ",One's dominant hand,The right hand,Both hands,The left hand,Either hand,B,kaggle200,"Friction torque can also be an asset in engineering. Bolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastened. This is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose.
T-spanners (wrenches) and straight spanners for tightening and loosening the wingnuts of the helmet were available from the helmet manufacturers to suit the pattern of wingnut used by the manufacturer.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavors. Bottle Caps candy was originally introduced by Breaker Confections in 1972. They are currently sold by the Ferrara Candy Company.
Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand rule.","A nut driver is a tool for tightening or loosening nuts and bolts. It essentially consists of a socket attached to a shaft and cylindrical handle and is similar in appearance and use to a screwdriver. They generally have a hollow shaft to accommodate a shank onto which a nut is threaded. They are typically used for lower torque applications than wrenches or ratchets and are frequently used in the appliance repair and electronics industries.Variations include T-shaped handles for providing the operator with a better grip, ratcheting handles, sockets with recessed magnets for holding fasteners, and flex shafts for bending around obstructions.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavors. Bottle Caps candy was originally introduced by Breaker Confections in 1972. They are currently sold by the Ferrara Candy Company.
Shop-work Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand rule.","They are currently sold by the Ferrara Candy Company.
Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand rule.They are currently sold by the Ferrara Candy Company.
Shop-work Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand ruleA nut driver is a tool for tightening or loosening nuts and boltsThey are typically used for lower torque applications than wrenches or ratchets and are frequently used in the appliance repair and electronics industries.Variations include T-shaped handles for providing the operator with a better grip, ratcheting handles, sockets with recessed magnets for holding fasteners, and flex shafts for bending around obstructions.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavorsThis is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose.
T-spanners (wrenches) and straight spanners for tightening and loosening the wingnuts of the helmet were available from the helmet manufacturers to suit the pattern of wingnut used by the manufacturer.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavorsBolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastenedThey generally have a hollow shaft to accommodate a shank onto which a nut is threaded- Friction torque can also be an asset in engineeringBottle Caps candy was originally introduced by Breaker Confections in 1972It essentially consists of a socket attached to","They are currently sold by the Ferrara Candy Company.
Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand rule.They are currently sold by the Ferrara Candy Company.
Shop-work Typical nuts, screws, bolts, bottle caps, and jar lids are tightened (moved away from the observer) clockwise and loosened (moved towards the observer) counterclockwise in accordance with the right-hand ruleA nut driver is a tool for tightening or loosening nuts and boltsThey are typically used for lower torque applications than wrenches or ratchets and are frequently used in the appliance repair and electronics industries.Variations include T-shaped handles for providing the operator with a better grip, ratcheting handles, sockets with recessed magnets for holding fasteners, and flex shafts for bending around obstructions.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavorsThis is true with such applications as lug nuts retaining wheels to vehicles, or equipment subjected to vibration with sufficiently well-attached bolts, nuts, or screws to prevent the vibration from shaking them loose.
T-spanners (wrenches) and straight spanners for tightening and loosening the wingnuts of the helmet were available from the helmet manufacturers to suit the pattern of wingnut used by the manufacturer.
Bottle Caps are sweet tablet candies made to look like metal soda bottle caps in grape, cola, orange, root beer, and cherry flavorsBolts and nuts, or screws are often designed to be fastened with a given amount of torque, where the friction is adequate during use or operation for the bolt, nut, or screw to remain safely fastenedThey generally have a hollow shaft to accommodate a shank onto which a nut is threaded- Friction torque can also be an asset in engineeringBottle Caps candy was originally introduced by Breaker Confections in 1972It essentially consists of a socket attached to[SEP]Which hand should be used to apply the right-hand rule when tightening or loosening nuts, screws, bolts, bottle caps, and jar lids?","['B', 'D', 'E']",1.0
What is the Minkowski diagram used for?,"Minkowski geometry may refer to: * The geometry of a finite-dimensional normed space * The geometry of Minkowski space thumb In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. An alternative definition of the Minkowski difference is sometimes used for computing intersection of convex shapes. * Minkowski's addition of convex shapes by Alexander Bogomolny: an applet * Wikibooks:OpenSCAD User Manual/Transformations#minkowski by Marius Kintel: Application * Application of Minkowski Addition to robotics by Joan Gerard Category:Abelian group theory Category:Affine geometry Category:Binary operations Category:Convex geometry Category:Digital geometry Category:Geometric algorithms Category:Hermann Minkowski Category:Sumsets Category:Theorems in convex geometry Category:Variational analysis In geometry, the Minkowski sum of two sets of position vectors A and B in Euclidean space is formed by adding each vector in A to each vector in B: : A + B = \\{\mathbf{a}+\mathbf{b}\,|\,\mathbf{a}\in A,\ \mathbf{b}\in B\\} The Minkowski difference (also Minkowski subtraction, Minkowski decomposition, or geometric difference) is the corresponding inverse, where (A - B) produces a set that could be summed with B to recover A. : A - B = \\{\mathbf{a}-\mathbf{b}\,|\,\mathbf{a}\in A,\ \mathbf{b}\in B\\} = A + (-B) The concept is named for Hermann Minkowski. == Example == [[File:Minkowski sum graph - vector version.svg|thumb | alt=Three squares are shown in the non- negative quadrant of the Cartesian plane. In particular, through these relationships, Minkowski functionals allow one to ""translate"" certain properties of a subset of X into certain properties of a function on X. ==Definition== Let K be a subset of a real or complex vector space X. Define the of K or the associated with or induced by K as being the function p_K : X \to [0, \infty], valued in the extended real numbers, defined by p_K(x) := \inf \\{r > 0 : x \in r K\\}, where recall that the infimum of the empty set is \,\infty\, (that is, \inf \varnothing = \infty). Instead it replaces the vector addition of the Minkowski sum with a vector subtraction. It has also been shown to be closely connected to the Earth mover's distance, and by extension, optimal transport. ===Motion planning=== Minkowski sums are used in motion planning of an object among obstacles. In the simple model of translational motion of an object in the plane, where the position of an object may be uniquely specified by the position of a fixed point of this object, the configuration space are the Minkowski sum of the set of obstacles and the movable object placed at the origin and rotated 180 degrees. ===Numerical control (NC) machining=== In numerical control machining, the programming of the NC tool exploits the fact that the Minkowski sum of the cutting piece with its trajectory gives the shape of the cut in the material. ===3D solid modeling=== In OpenSCAD Minkowski sums are used to outline a shape with another shape creating a composite of both shapes. ===Aggregation theory=== Minkowski sums are also frequently used in aggregation theory when individual objects to be aggregated are characterized via sets. === Collision detection === Minkowski sums, specifically Minkowski differences, are often used alongside GJK algorithms to compute collision detection for convex hulls in physics engines. 
==Algorithms for computing Minkowski sums== thumb|300px | alt=Minkowski addition of four line-segments. : -B = \\{\mathbf{-b}\,|\,\mathbf{b}\in B\\} : A - B = \left(A^c + (-B)\right)^c This definition allows a symmetrical relationship between the Minkowski sum and difference. The Minkowski content (named after Hermann Minkowski), or the boundary measure, of a set is a basic concept that uses concepts from geometry and measure theory to generalize the notions of length of a smooth curve in the plane, and area of a smooth surface in space, to arbitrary measurable sets. This definition is fundamental in the Lp Brunn-Minkowski theory. ==See also== * * , an inequality on the volumes of Minkowski sums * * * * * (a.k.a. Quermassintegral or intrinsic volume) * * * * * ==Notes== ==References== * * * * * *. *. *. *. * * ==External links== * * * Minkowski Sums, in Computational Geometry Algorithms Library * The Minkowski Sum of Two Triangles and The Minkowski Sum of a Disk and a Polygon by George Beck, The Wolfram Demonstrations Project. Indeed, clearly the Minkowski content assigns the same value to the set A as well as its closure. thumb|alt=|The red figure is the Minkowski sum of blue and green figures. : (A - B) + B \subseteq A : (A + B) - B \supseteq A : A - B = \left(A^c + (-B)\right)^c : A + B = \left(A^c - (-B)\right)^c In 2D image processing the Minkowski sum and difference are known as dilation and erosion. If the upper and lower m-dimensional Minkowski content of A are equal, then their common value is called the Minkowski content Mm(A). == Properties == * The Minkowski content is (generally) not a measure. Category:Measure theory Category:Geometry Category:Analytic geometry Category:Dimension theory Category:Dimension Category:Measures (measure theory) Category:Fractals Category:Hermann Minkowski For Minkowski addition, the , \\{ 0 \\}, containing only the zero vector, 0, is an identity element: for every subset S of a vector space, :S + \\{0\\} = S. The Minkowski function is always non-negative (meaning p_K \geq 0) and p_K(x) is a real number if and only if \\{r > 0 : x \in r K\\} is not empty. ",The Minkowski diagram is used to define concepts and demonstrate properties of Newtonian mechanics and to provide geometrical interpretation to the generalization of Lorentz transformations to relativistic mechanics.,The Minkowski diagram is used to define concepts and demonstrate properties of general relativity and to provide geometrical interpretation to the generalization of special relativity to relativistic mechanics.,The Minkowski diagram is used to define concepts and demonstrate properties of Lorentz transformations and to provide geometrical interpretation to the generalization of quantum mechanics to relativistic mechanics.,The Minkowski diagram is used to define concepts and demonstrate properties of special relativity and to provide geometrical interpretation to the generalization of general relativity to relativistic mechanics.,The Minkowski diagram is used to define concepts and demonstrate properties of Lorentz transformations and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics.,E,kaggle200,"In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to warping factor between the axes' unit lengths in a Minkowski diagram. 
In particular, if formula_19 and formula_20 are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula:
In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola t^2 − x^2 = 1 to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of ""Minkowski's representation of the Lorentz transformation"". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams.
The cosmos of special relativity consists of Minkowski spacetime and the addition of velocities corresponds to composition of Lorentz transformations. In the special theory of relativity Newtonian mechanics is modified into relativistic mechanics.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g. proper time and length contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (Minkowski metric and from it derived quantities and the Poincaré group as symmetry group of spacetime) ""following"" from the invariance of the spacetime interval on the spacetime manifold as consequences of the postulates of special relativity, not to specific application or ""derivation"" of the invariance of the spacetime interval. This structure provides the background setting of all present relativistic theories, barring general relativity for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian.","Overview The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativity. A Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimension. The units of measurement in these diagrams are taken such that the light cone at an event consists of the lines of slope plus or minus one through that event. The horizontal lines correspond to the usual notion of simultaneous events for a stationary observer at the origin.
History Albert Einstein discovered special relativity in 1905, with Hermann Minkowski providing his graphical representation in 1908.In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola t2−x2=1 to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of ""Minkowski's representation of the Lorentz transformation"". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams.When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term Minkowski diagram for their spacetime geometry. Instead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g. proper time and length contraction) and to provide geometrical interpretation to the generalization of Newtonian mechanics to relativistic mechanics. For these special topics, see the referenced articles, as the presentation below will be principally confined to the mathematical structure (Minkowski metric and from it derived quantities and the Poincaré group as symmetry group of spacetime) following from the invariance of the spacetime interval on the spacetime manifold as consequences of the postulates of special relativity, not to specific application or derivation of the invariance of the spacetime interval. This structure provides the background setting of all present relativistic theories, barring general relativity for which flat Minkowski spacetime still provides a springboard as curved spacetime is locally Lorentzian.","Overview The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativityA Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimensionSince the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativityIn the special theory of relativity Newtonian mechanics is modified into relativistic mechanics.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.gInstead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g- In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to warping factor between the axes' unit lengths in a Minkowski diagramIn particular, if formula_19 and formula_20 are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula:
In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlinesWilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams.When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term Minkowski diagram for their spacetime geometryIn 1914 Ludwik Silberstein included a diagram of ""Minkowski's representation of the Lorentz transformation""The horizontal lines correspond to the usual notion of simultaneous events for a stationary observer at the origin.
History Albert Einstein discovered special relativit","Overview The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativityA Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimensionSince the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativityIn the special theory of relativity Newtonian mechanics is modified into relativistic mechanics.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.gInstead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
Minkowski's principal tool is the Minkowski diagram, and he uses it to define concepts and demonstrate properties of Lorentz transformations (e.g- In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to warping factor between the axes' unit lengths in a Minkowski diagramIn particular, if formula_19 and formula_20 are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula:
In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlinesWilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams.When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term Minkowski diagram for their spacetime geometryIn 1914 Ludwik Silberstein included a diagram of ""Minkowski's representation of the Lorentz transformation""The horizontal lines correspond to the usual notion of simultaneous events for a stationary observer at the origin.
History Albert Einstein discovered special relativit[SEP]What is the Minkowski diagram used for?","['E', 'D', 'C']",1.0
What are the two main interpretations for the disparity between the presence of matter and antimatter in the observable universe?,"The formation of antimatter galaxies was originally thought to explain the baryon asymmetry, as from a distance, antimatter atoms are indistinguishable from matter atoms; both produce light (photons) in the same way. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The behavioral differences between matter and antimatter are specific to each individual experiment. In physical cosmology, the baryon asymmetry problem, also known as the matter asymmetry problem or the matter–antimatter asymmetry problem, is the observed imbalance in baryonic matter (the type of matter experienced in everyday life) and antibaryonic matter in the observable universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. Initial analysis focused on whether antimatter should react the same as matter or react oppositely. In modern physics, antimatter is defined as matter composed of the antiparticles (or ""partners"") of the corresponding particles in ""ordinary"" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. As such, an EDM would allow matter and antimatter to decay at different rates leading to a possible matter–antimatter asymmetry as observed today. Several theoretical arguments arose which convinced physicists that antimatter would react exactly the same as normal matter. Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. On the basis of such analyses, it is now deemed unlikely that any region within the observable universe is dominated by antimatter. ===Electric dipole moment=== The presence of an electric dipole moment (EDM) in any fundamental particle would violate both parity (P) and time (T) symmetries. The antiuniverse would flow back in time from the Big Bang, becoming bigger as it does so, and would be also dominated by antimatter. The results, which are identical to that of hydrogen, confirmed the validity of quantum mechanics for antimatter. ==Origin and asymmetry== Most matter observable from the Earth seems to be made of matter rather than antimatter. They inferred that gravitational repulsion between matter and antimatter was implausible as it would violate CPT invariance, conservation of energy, result in vacuum instability, and result in CP violation. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. In this situation the particles and their corresponding antiparticles do not achieve thermal equilibrium due to rapid expansion decreasing the occurrence of pair- annihilation. ==Other explanations== ===Regions of the universe where antimatter dominates=== Another possible explanation of the apparent baryon asymmetry is that matter and antimatter are essentially separated into different, widely distant regions of the universe. 
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. High-precision experiments could reveal small previously unseen differences between the behavior of matter and antimatter. ","The universe began with a small preference for matter, or it was originally perfectly asymmetric, but a set of phenomena contributed to a small imbalance in favor of antimatter over time.","The universe began with a small preference for antimatter, or it was originally perfectly symmetric, but a set of phenomena contributed to a small imbalance in favor of antimatter over time.","The universe began with equal amounts of matter and antimatter, or it was originally perfectly symmetric, but a set of phenomena contributed to a small imbalance in favor of antimatter over time.","The universe began with a small preference for matter, or it was originally perfectly symmetric, but a set of phenomena contributed to a small imbalance in favor of matter over time.","The universe began with equal amounts of matter and antimatter, or it was originally perfectly asymmetric, but a set of phenomena contributed to a small imbalance in favor of matter over time.",D,kaggle200,"One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. A number of theoretical mechanisms are proposed to account for this discrepancy, namely identifying conditions that favour symmetry breaking and the creation of normal matter (as opposed to antimatter). This imbalance has to be exceptionally small, on the order of 1 in every (≈) particles a small fraction of a second after the Big Bang. After most of the matter and antimatter was annihilated, what remained was all the baryonic matter in the current universe, along with a much greater number of bosons. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed. These experiments involved a series of particle collisions and found that the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known.
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, matter exists. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every (10) particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles developed is called baryogenesis.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time. The second point of view is preferred, although there is no clear experimental evidence indicating either of them to be the correct one.","One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 1010 particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles developed is called baryogenesis.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time. The second point of view is preferred, although there is no clear experimental evidence indicating either of them to be the correct one.","This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physicsThe reason for this discrepancy is not yet known.
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeOne of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeThe universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatterThe process by which this inequality between matter and antimatter particles developed is called baryogenesis.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time- One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeThis imbalance would have been exceptionally small, on the order of 1 in every (10) particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatterThis imbalance would have been exceptionally small, on the order of 1 in every 1010 particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatterThe universe, as a whole, seems to have a nonzero positive baryon number density – that is, matt","This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physicsThe reason for this discrepancy is not yet known.
One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeOne of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeThe universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatterThe process by which this inequality between matter and antimatter particles developed is called baryogenesis.
There are two main interpretations for this disparity: either the universe began with a small preference for matter (total baryonic number of the universe different from zero), or the universe was originally perfectly symmetric, but somehow a set of phenomena contributed to a small imbalance in favour of matter over time- One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universeThis imbalance would have been exceptionally small, on the order of 1 in every (10) particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatterThis imbalance would have been exceptionally small, on the order of 1 in every 1010 particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of bosons.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatterThe universe, as a whole, seems to have a nonzero positive baryon number density – that is, matt[SEP]What are the two main interpretations for the disparity between the presence of matter and antimatter in the observable universe?","['D', 'E', 'C']",1.0
What is the Ramsauer-Townsend effect?,"The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low- energy electrons by atoms of a noble gas. It was here that he conducted research on the quantum effect of the transparency of noble gases to slow electrons, known as the Ramsauer–Townsend effect. This is the Ramsauer–Townsend effect. The effect can not be explained by Classical mechanics, but requires the wave theory of quantum mechanics. == Definitions == When an electron moves through a gas, its interactions with the gas atoms cause scattering to occur. thumb|right|Ramsauer in 1928 Carl Wilhelm Ramsauer (6 February 1879 – 24 December 1955) was a German professor of physics and research physicist, famous for the discovery of the Ramsauer–Townsend effect. In 1970 Gryzinski has proposed classical explanation of Ramsauer effect using effective picture of atom as oscillating multipole of electric field (dipole, quadrupole, octupole), which was a consequence of his free-fall atomic model. == References == * * * * * * * Bohm, D., Quantum Theory. Because noble gas atoms have a relatively high first ionization energy and the electrons do not carry enough energy to cause excited electronic states, ionization and excitation of the atom are unlikely, and the probability of elastic scattering over all angles is approximately equal to the probability of collision. == Description == The effect is named for Carl Ramsauer (1879-1955) and John Sealy Townsend (1868-1957), who each independently studied the collisions between atoms and low-energy electrons in the early 1920s. A simple model of the collision that makes use of wave theory can predict the existence of the Ramsauer–Townsend minimum. Notable people with the surname include: *Carl Ramsauer (1879–1955), professor of physics who discovered of the Ramsauer-Townsend effect *Johann Georg Ramsauer (1795–1874), Austrian mine operator, director of the excavations at the Hallstatt cemetery from 1846 to 1863 *Peter Ramsauer (born 1954), German politician ==See also== *Ramsauer Ache, a river of Bavaria, Germany *Ramsauer–Townsend effect, physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas Category:German- language surnames de:Ramsauer Predicting from theory the kinetic energy that will produce a Ramsauer–Townsend minimum is quite complicated since the problem involves understanding the wave nature of particles. No good explanation for the phenomenon existed until the introduction of quantum mechanics, which explains that the effect results from the wave-like properties of the electron. He pioneered the field of electron and proton collisions with gas molecules.Mehra, Volume 1, Part 2, 2001, p. 620. == Early life == Ramsauer was born in Osternburg, Oldenburg. If one tries to predict the probability of collision with a classical model that treats the electron and atom as hard spheres, one finds that the probability of collision should be independent of the incident electron energy (see Kukolich “Demonstration of the Ramsauer - Townsend Effect in a Xenon Thyratron”, S.G.Kukolich, Am. J. Phys. 36, 1968, pages 701 - 70 ). Ramsauer is a surname. * * Griffiths, D.J., Introduction to Quantum Mechanics,Section 2.6 Category:Scattering Category:Physical phenomena He was awarded his doctorate at Kiel.Hentschel, 1966, Appendix F, pp. XLII-XLII. 
== Career == From 1907 to 1909, Ramsauer was a teaching assistant to Philipp Lenard in the physics department at the Ruprecht Karl University of Heidelberg. Document #93 in Hentschel, 1996, pp. 290 – 292. addressed the atrocious state of physics instruction in Germany, which Ramsauer concluded was the result of politicization of education.Hentschel, 1966, Appendix F; see the entry for Carl Ramsauer. Dieter Ramsauer (* May 2, 1939 in Velbert; † April 23, 2021 in Schwelm) was a German engineer who was renowned for numerous inventions. Barth, 1957) == Selected publications == *Carl Ramsauer Über den Wirkungsquerschnitt der Gasmoleküle gegenüber langsamen Elektronen, Annalen der Physik (4) 64 513–540 (1921). These interactions are classified as inelastic if they cause excitation or ionization of the atom to occur and elastic if they do not. ",The Ramsauer-Townsend effect is a physical phenomenon that involves the scattering of low-energy electrons by atoms of a non-noble gas. It can be explained by classical mechanics.,The Ramsauer-Townsend effect is a physical phenomenon that involves the scattering of low-energy electrons by atoms of a noble gas. It requires the wave theory of quantum mechanics to be explained.,The Ramsauer-Townsend effect is a physical phenomenon that involves the scattering of high-energy electrons by atoms of a noble gas. It can be explained by classical mechanics.,The Ramsauer-Townsend effect is a physical phenomenon that involves the scattering of high-energy electrons by atoms of a non-noble gas. It requires the wave theory of quantum mechanics to be explained.,The Ramsauer-Townsend effect is a physical phenomenon that involves the scattering of electrons by atoms of any gas. It can be explained by classical mechanics.,B,kaggle200,"A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
In agriculture, gardening, and forestry, broadcast seeding is a method of seeding that involves scattering seed, by hand or mechanically, over a relatively large area. This is in contrast to:
The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as more sinusoidal terms are added.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The effect cannot be explained by classical mechanics, but requires the wave theory of quantum mechanics.","The Kerker effect is a phenomenon in scattering directionality, which occurs when different multipole responses are presented and not negligible.
A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The effect cannot be explained by classical mechanics, but requires the wave theory of quantum mechanics.","This is in contrast to:
The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as more sinusoidal terms are added.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The Kerker effect is a phenomenon in scattering directionality, which occurs when different multipole responses are presented and not negligible.
A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The effect cannot be explained by classical mechanics, but requires the wave theory of quantum mechanics. A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
In agriculture, gardening, and forestry, broadcast seeding is a method of seeding that involves scattering seed, by hand or mechanically, over a relatively large area","This is in contrast to:
The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as more sinusoidal terms are added.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The Kerker effect is a phenomenon in scattering directionality, which occurs when different multipole responses are presented and not negligible.
A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. The effect cannot be explained by classical mechanics, but requires the wave theory of quantum mechanics. A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.
In agriculture, gardening, and forestry, broadcast seeding is a method of seeding that involves scattering seed, by hand or mechanically, over a relatively large area[SEP]What is the Ramsauer-Townsend effect?","['B', 'E', 'D']",1.0
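The context above contrasts the classical hard-sphere prediction, in which the collision probability is independent of the incident electron energy, with the quantum, wave-based prediction of a transmission minimum, and it cites Griffiths, Introduction to Quantum Mechanics, Section 2.6. Below is a minimal numerical sketch of that standard textbook model: the transmission coefficient of a particle crossing a one-dimensional finite square well. The well depth, width, and the energy grid are arbitrary illustrative values, not parameters taken from the sources quoted above.
```python
import numpy as np

# Illustrative 1D finite-square-well transmission coefficient, the standard
# textbook analogue of the Ramsauer-Townsend minimum (cf. Griffiths, Introduction
# to Quantum Mechanics, Sec. 2.6, cited in the context above). The well depth,
# half-width, and energy range below are assumed values for illustration only.

hbar = 1.0545718e-34          # J*s
m_e = 9.10938e-31             # electron mass, kg
eV = 1.602176634e-19          # J per eV

V0 = 10.0 * eV                # assumed well depth modelling the atomic potential
a = 1.0e-10                   # assumed half-width of the well, about 1 angstrom

def transmission(E):
    """Transmission coefficient T(E) for a particle of energy E > 0
    crossing a finite square well of depth V0 and width 2a."""
    k2 = np.sqrt(2.0 * m_e * (E + V0)) / hbar      # wave number inside the well
    s = np.sin(2.0 * a * k2)
    return 1.0 / (1.0 + (V0**2 / (4.0 * E * (E + V0))) * s**2)

E_grid = np.linspace(0.01, 30.0, 600) * eV
T = transmission(E_grid)
# Energies of (near-)perfect transmission are the wave-mechanical feature that the
# classical hard-sphere picture (energy-independent collision probability) cannot give.
best = E_grid[np.argmax(T)] / eV
print(f"Maximum transmission T = {T.max():.4f} near E = {best:.2f} eV")
```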
What is Minkowski space?,"Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure on which special relativity is formulated. In mathematics, specifically the field of algebraic number theory, a Minkowski space is a Euclidean space associated with an algebraic number field. In mathematical physics, Minkowski space (or Minkowski spacetime) (. For an overview, Minkowski space is a -dimensional real vector space equipped with a nondegenerate, symmetric bilinear form on the tangent space at each point in spacetime, here simply called the Minkowski inner product, with metric signature either or . Minkowski space is a suitable basis for special relativity, a good description of physical systems over finite distances in systems without significant gravitation. Thus, the structure of Minkowski space is still essential in the description of general relativity. == Geometry == The meaning of the term geometry for the Minkowski space depends heavily on the context. Minkowski geometry may refer to: * The geometry of a finite-dimensional normed space * The geometry of Minkowski space The Minkowski distance or Minkowski metric is a metric in a normed vector space which can be considered as a generalization of both the Euclidean distance and the Manhattan distance. Introducing more terminology (but not more structure), Minkowski space is thus a pseudo- Euclidean space with total dimension and signature or . Extract of page 184 Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. Because it treats time differently to how it treats the 3 spatial dimensions, Minkowski space differs from four-dimensional Euclidean space. Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities).This similarity between flat space and curved space at infinitesimally small distance scales is foundational to the definition of a manifold in general. thumb In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. However, the mathematics can easily be extended or simplified to create an analogous generalized Minkowski space in any number of dimensions. The Minkowski metric is the metric tensor of Minkowski space. Minkowski space is not endowed with a Euclidean geometry, and not with any of the generalized Riemannian geometries with intrinsic curvature, those exposed by the model spaces in hyperbolic geometry (negative curvature) and the geometry modeled by the sphere (positive curvature). Minkowski space is, in particular, not a metric space and not a Riemannian manifold with a Riemannian metric. Minkowski space is thus a comparatively simple special case of a Lorentzian manifold. Minkowski gave an alternative formulation of this idea that used a real time coordinate instead of an imaginary one, representing the four variables of space and time in coordinate form in a four dimensional real vector space. Although initially developed by mathematician Hermann Minkowski for Maxwell's equations of electromagnetism, the mathematical structure of Minkowski spacetime was shown to be implied by the postulates of special relativity. 
",Minkowski space is a physical space where objects move in a straight line unless acted upon by a force.,Minkowski space is a mathematical model that combines inertial space and time manifolds with a non-inertial reference frame of space and time into a four-dimensional model relating a position to the field.,Minkowski space is a mathematical model that combines space and time into a two-dimensional model relating a position to the field.,Minkowski space is a mathematical model that combines space and time into a three-dimensional model relating a position to the field.,Minkowski space is a physical space where objects move in a curved line unless acted upon by a force.,B,kaggle200,"Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity.
This supergroup has the following Lie superalgebra. Suppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor representations for formula_21-dimensional Minkowski space.
A standard or orthonormal basis for Minkowski space is a set of four mutually orthogonal vectors such that
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3). Elements of Minkowski space are called events. Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just M. It is perhaps the simplest example of a pseudo-Riemannian manifold.","Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3). Elements of Minkowski space are called events. Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just M. It is perhaps the simplest example of a pseudo-Riemannian manifold.
In mathematical physics, Minkowski space (or Minkowski spacetime) () combines inertial space and time manifolds (x,y) with a non-inertial reference frame of space and time (x',t') into a four-dimensional model relating a position (inertial frame of reference) to the field (physics). A four-vector (x,y,z,t) consists of a coordinate axes such as a Euclidean space plus time. This may be used with the non-inertial frame to illustrate specifics of motion, but should not be confused with the spacetime model generally. The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it ""was grown on experimental physical grounds."" Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions.","Thus, the structure of Minkowski space is still essential in the description of general relativity.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3). Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it ""was grown on experimental physical grounds."" Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions. Elements of Minkowski space are called events. It is perhaps the simplest example of a pseudo-Riemannian manifold.
In mathematical physics, Minkowski space (or Minkowski spacetime) combines inertial space and time manifolds (x,y) with a non-inertial reference frame of space and time (x',t') into a four-dimensional model relating a position (inertial frame of reference) to the field (physics). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just M. Thus, the structure of Minkowski space is still essential in the description of general relativity.
This supergroup has the following Lie superalgebraSuppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor represen","Thus, the structure of Minkowski space is still essential in the description of general relativity.
Introducing more terminology (but not more structure), Minkowski space is thus a pseudo-Euclidean space with total dimension n = 4 and signature (3, 1) or (1, 3). Even in curved space, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it ""was grown on experimental physical grounds."" Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions. Elements of Minkowski space are called events. It is perhaps the simplest example of a pseudo-Riemannian manifold.
In mathematical physics, Minkowski space (or Minkowski spacetime) combines inertial space and time manifolds (x,y) with a non-inertial reference frame of space and time (x',t') into a four-dimensional model relating a position (inertial frame of reference) to the field (physics). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Minkowski space is often denoted R3,1 or R1,3 to emphasize the chosen signature, or just M. Thus, the structure of Minkowski space is still essential in the description of general relativity.
This supergroup has the following Lie superalgebra. Suppose that formula_20 is Minkowski space (of dimension formula_21), and formula_22 is a finite sum of irreducible real spinor represen[SEP]What is Minkowski space?","['B', 'E', 'D']",1.0
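Several of the passages above state that the spacetime interval between two events is independent of the inertial frame in which it is recorded, with the signature written as (3, 1) or (1, 3). The short sketch below checks that invariance numerically for a single Lorentz boost; the (-, +, +, +) sign convention, units with c = 1, and the sample event and boost speed are assumptions made only for the illustration.
```python
import numpy as np

# Numerical check that the Minkowski interval is frame-independent.
# Conventions assumed for illustration: c = 1, metric signature (-, +, +, +).

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric tensor

def interval(event):
    """Squared interval s^2 = eta_{mu nu} x^mu x^nu for an event (t, x, y, z)."""
    return event @ eta @ event

def boost_x(v):
    """Lorentz boost along x with speed v (|v| < 1 in units of c)."""
    g = 1.0 / np.sqrt(1.0 - v**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

event = np.array([2.0, 1.0, 0.5, -0.3])   # sample event (t, x, y, z), arbitrary values
boosted = boost_x(0.6) @ event

print(interval(event))    # same value in both frames ...
print(interval(boosted))  # ... up to floating-point rounding
```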
What is the Optical Signal-to-Noise Ratio (OSNR)?,"The OSNR is the ratio between the signal power and the noise power in a given bandwidth. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. OSNR is measured with an optical spectrum analyzer. ==Types and abbreviations== Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. OSNR, a four-letter acronym or abbreviation, may refer to: *Optical signal-to- noise ratio *Optical spectrum analyzer *Optical performance monitoring *Other / Signature Not Required - a delivery classification used by some shippers. Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Depending on whether the signal is a constant () or a random variable (), the signal-to-noise ratio for random noise becomes: : \mathrm{SNR} = \frac{s^2}{\mathrm{E}[N^2]} where E refers to the expected value, i.e. in this case the mean square of , or : \mathrm{SNR} = \frac{\mathrm{E}[S^2]}{\mathrm{E}[N^2]} If the noise has expected value of zero, as is common, the denominator is its variance, the square of its standard deviation . SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. Related measures are the ""contrast ratio"" and the ""contrast-to- noise ratio"". ==Modulation system measurements== ===Amplitude modulation=== Channel signal-to-noise ratio is given by :\mathrm{(SNR)_{C,AM}} = \frac{A_C^2 (1 + k_a^2 P)} {2 W N_0} where W is the bandwidth and k_a is modulation index Output signal-to-noise ratio (of AM receiver) is given by :\mathrm{(SNR)_{O,AM}} = \frac{A_c^2 k_a^2 P} {2 W N_0} ===Frequency modulation=== Channel signal-to-noise ratio is given by :\mathrm{(SNR)_{C,FM}} = \frac{A_c^2} {2 W N_0} Output signal-to-noise ratio is given by :\mathrm{(SNR)_{O,FM}} = \frac{A_c^2 k_f^2 P} {2 N_0 W^3} ==Noise reduction== All real measurements are disturbed by noise. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. ==Definition== Signal-to-noise ratio is defined as the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): : \mathrm{SNR} = \frac{P_\mathrm{signal}}{P_\mathrm{noise}}, where is average power. Audio uses RMS, Video P-P, which gave +9 dB more SNR for video. ==Optical signals== Optical signals have a carrier frequency (about and more) that is much higher than the modulation frequency. GSNR stands for geometric signal-to- noise ratio. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging). Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. 
In this case, the SNR is approximately : \mathrm{SNR_{dB}} \approx 20 \log_{10} (2^n {\textstyle\sqrt {3/2}}) \approx 6.02 \cdot n + 1.761 ===Floating point=== Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. Philadelphia: Lippincott Williams & Wilkins, 2006, p. 280. : \mathrm{SNR} = \frac{\mu}{\sigma} where \mu is the signal mean or expected value and \sigma is the standard deviation of the noise, or an estimate thereof.The exact methods may vary between fields. Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: : \mathrm{SNR_{dB}} = {P_\mathrm{signal,dB} - P_\mathrm{noise,dB}}. Using the definition of SNR : \mathrm{SNR_{dB}} = 10 \log_{10} \left ( \frac{P_\mathrm{signal}}{P_\mathrm{noise}} \right ). ","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the modulation frequency and the carrier frequency of an optical signal, used to describe the signal quality in systems where dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality without taking the receiver into account.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality in situations where the dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a fixed bandwidth of 6.02m, used to describe the signal quality in systems where dynamic range is less than 6.02m.","The Optical Signal-to-Noise Ratio (OSNR) is the ratio between the signal power and the noise power in a given bandwidth, used to describe the signal quality in situations where the dynamic range is large or unpredictable.",B,kaggle200,"Note that the dynamic range is much larger than fixed-point, but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms.
Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio.
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The resulting signal influence relies mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of 40 GBit DPSK would not fit in this bandwidth. OSNR is measured with an optical spectrum analyzer.","Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The resulting signal influence relies mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even the signal of 40 GBit DPSK would not fit in this bandwidth. OSNR is measured with an optical spectrum analyzer.","OSNR is measured with an optical spectrum analyzer.The OSNR is the ratio between the signal power and the noise power in a given bandwidthOSNR is measured with an optical spectrum analyzerTo describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is usedA ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SINR is the signal-to-interference-plus-noise ratio.
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. PSNR stands for peak signal-to-noise ratio. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of 40 GBit DPSK would not fit in this bandwidth. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. GSNR stands for geometric signal-to-noise ratio. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms.
Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. Most commonly a reference bandwidth of 0.1 nm is used. This way the noise covers a bandwidth that is much wider than the signal itself. This bandwidth is independent of the modulation format, the frequency and the receiver. The resulting signal influence relies mainly on the filtering of the noise. Note that the dynamic range 
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequencySignal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noiseSINR is the signal-to-interference-plus-noise ratio.
The optical component used for this purpose in DWDM networks is known as optical performance monitor (OPM) or optical channel monitor (OCM), which measures channel power, wavelength, and optical signal-to-noise ratio (OSNR) for each channel.
Optical signals have a carrier frequency (about 200 THz and more) that is much higher than the modulation frequency. PSNR stands for peak signal-to-noise ratio. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of 40 GBit DPSK would not fit in this bandwidth. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. GSNR stands for geometric signal-to-noise ratio. The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms.
Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. Most commonly a reference bandwidth of 0.1 nm is used. This way the noise covers a bandwidth that is much wider than the signal itself. This bandwidth is independent of the modulation format, the frequency and the receiver. The resulting signal influence relies mainly on the filtering of the noise. Note that the dynamic range [SEP]What is the Optical Signal-to-Noise Ratio (OSNR)?","['B', 'C', 'D']",1.0
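The context defines SNR as the ratio of signal power to noise power, expressed in decibels as SNR_dB = 10 log10(P_signal / P_noise), and defines OSNR analogously as signal power over noise power in a reference bandwidth. A small worked example of those decibel relations is sketched below; the power values are made-up illustrative numbers, not measurements from the text.
```python
import math

# Worked example of the decibel relations quoted in the context:
# SNR_dB = 10*log10(P_signal / P_noise) = P_signal,dB - P_noise,dB.
# The power values below are assumed purely for illustration.

def snr_db(p_signal_w, p_noise_w):
    """Signal-to-noise ratio in dB from linear powers (watts)."""
    return 10.0 * math.log10(p_signal_w / p_noise_w)

def watts_to_dbm(p_w):
    """Convert a power in watts to dBm (dB relative to 1 mW)."""
    return 10.0 * math.log10(p_w / 1e-3)

p_sig = 1e-3      # assumed 1 mW of signal power in the reference bandwidth
p_noise = 1e-5    # assumed 10 uW of noise power in the same bandwidth

print(snr_db(p_sig, p_noise))                      # 20.0 dB
print(watts_to_dbm(p_sig) - watts_to_dbm(p_noise)) # same 20.0 dB via the dB difference
```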
What is the interpretation of supersymmetry in stochastic supersymmetric theory?,"Accordingly, the emergent long-range behavior that always accompanies dynamical chaos and its derivatives such as turbulence and self- organized criticality can be understood as a consequence of the Goldstone theorem. == History and relation to other theories == The first relation between supersymmetry and stochastic dynamics was established in two papers in 1979 and 1982 by Giorgio Parisi and Nicolas Sourlas, who demonstrated that the application of the BRST gauge fixing procedure to Langevin SDEs, i.e., to SDEs with linear phase spaces, gradient flow vector fields, and additive noises, results in N=2 supersymmetric models. Supersymmetric theory of stochastic dynamics or stochastics (STS) is an exact theory of stochastic (partial) differential equations (SDEs), the class of mathematical models with the widest applicability covering, in particular, all continuous time dynamical systems, with and without noise. In the domain of applicability of stochastic differential equations including, e.g, classical physics, spontaneous supersymmetry breaking encompasses such nonlinear dynamical phenomena as chaos, turbulence, pink noise, etc. ==Supersymmetry breaking scale== In particle physics, supersymmetry breaking scale is the energy scale where supersymmetry breaking takes place. The theory identifies a model as chaotic, in the generalized, stochastic sense, if its ground state is not supersymmetric, i.e., if the supersymmetry is broken spontaneously. Within STS, spontaneous breakdown of supersymmetry is indeed a nontrivial dynamical phenomenon that has been variously known across disciplines as chaos, turbulence, self-organized criticality etc. A similar approach was used to establish that classical mechanics, its stochastic generalization, and higher-order Langevin SDEs also have supersymmetric representations. Since then, relation between so-emerged supersymmetry of Langevin SDEs and a few physical concepts have been established including the fluctuation dissipation theorems, Jarzynski equality, Onsager principle of microscopic reversibility, solutions of Fokker–Planck equations, self- organization, etc. As a supersymmetric theory, BRST procedure approach to SDEs can be viewed as one of the realizations of the concept of Nicolai map. == Parisi–Sourlas approach to Langevin SDEs == In the context of supersymmetric approach to stochastic dynamics, the term Langevin SDEs denotes SDEs with Euclidean phase space, X = \mathbb{R}^n , gradient flow vector field, and additive Gaussian white noise, \dot x(t) = - \partial U(x(t))+(2\Theta)^{1/2} \xi(t),where x\in X , \xi \in \mathbb{R}^n is the noise variable, \Theta is the noise intensity, and \partial U(x), which in coordinates (\partial U(x))^i \equiv \delta^{ij}\partial_jU(x) and \partial_i U(x) \equiv \partial U(x)/\partial x^i, is the gradient flow vector field with U(x) being the Langevin function often interpreted as the energy of the purely dissipative stochastic dynamical system. This evolution has an intrinsic BRST or topological supersymmetry representing the preservation of topology and/or the concept of proximity in the phase space by continuous time dynamics. In the general stochastic case, one can consider global supersymmetric states, \theta's, from the De Rham cohomology classes of X and observables, \gamma , that are Poincare duals of closed manifolds non-trivial in homology of X. 
Once such suitable gauge is obtained, the dynamics of the SUSY gauge theory work as follows: we seek a Lagrangian that is invariant under the Super-gauge transformations (these transformations are an important tool needed to develop supersymmetric version of a gauge theory). In addition, physically meaningful Langevin SDEs never break supersymmetry spontaneously. Therefore, for the purpose of the identification of the spontaneous supersymmetry breaking as dynamical chaos, the generalization of the Parisi–Sourlas approach to SDEs of general form is needed. The second is the spontaneous breakdown of supersymmetry. Real dynamical systems cannot be isolated from their environments and thus always experience stochastic influence. == Spontaneous supersymmetry breaking and dynamical chaos == BRST gauge fixing procedure applied to SDEs leads directly to the Witten index. The theory began with the application of BRST gauge fixing procedure to Langevin SDEs, that was later adapted to classical mechanics and its stochastic generalization, higher-order Langevin SDEs, and, more recently, to SDEs of arbitrary form, which allowed to link BRST formalism to the concept of transfer operators and recognize spontaneous breakdown of BRST supersymmetry as a stochastic generalization of dynamical chaos. Such generalization showed that all SDEs possess N=1 BRST or topological supersymmetry (TS) and this finding completes the story of relation between supersymmetry and SDEs. Namely, \textstyle {\mathcal W} = \operatorname{Tr} (-1)^{\hat n} \langle M_{t't}^* \rangle_\text{noise} = \langle \operatorname{Tr} (-1)^{\hat n} M_{t't}^* \rangle_\text{noise} = I_{L} . === The meaning of supersymmetry and the butterfly effect === The N=2 supersymmetry of Langevin SDEs has been linked to the Onsager principle of microscopic reversibility and Jarzynski equality. This is in contrast with the traditional deterministic chaos whose trajectory-based properties such as the topological mixing cannot in principle be generalized to stochastic case because, just like in quantum dynamics, all trajectories are possible in the presence of noise and, say, the topological mixing property is satisfied trivially by all models with non-zero noise intensity. == STS as a topological field theory == thumb|The square acbd represents an instanton, i.e, the family of trajectories of deterministic flow (dotted arrowed curves) leading from one critical point (b) to another (a). Finally, Nature does not have to be supersymmetric at any scale. ==See also== * Soft SUSY breaking * Timeline of the Big Bang * Chronology of the universe * Big Bang * Supersymmetric theory of stochastic dynamics Category:Supersymmetric quantum field theory Category:Symmetry ",Supersymmetry is a type of hydromagnetic dynamo that arises when the magnetic field becomes strong enough to affect the fluid motions.,Supersymmetry is a measure of the amplitude of the dynamo in the induction equation of the kinematic approximation.,Supersymmetry is a measure of the strength of the magnetic field in the induction equation of the kinematic dynamo.,Supersymmetry is a property of deterministic chaos that arises from the continuity of the flow in the model's phase space.,"Supersymmetry is an intrinsic property of all stochastic differential equations, and it preserves continuity in the model's phase space via continuous time flows.",E,kaggle200,"In a supersymmetric theory the equations for force and the equations for matter are identical. 
In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. Supersymmetry is a spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. In supersymmetry, each particle from one class would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a """"selectron"""" (superpartner electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly ""unbroken"" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.
In supersymmetric theory of stochastics, an approximation-free theory of stochastic differential equations, 1/""f"" noise is one of the manifestations of the spontaneous breakdown of topological supersymmetry. This supersymmetry is an intrinsic property of all stochastic differential equations and its meaning is the preservation of the continuity of the phase space by continuous time dynamics. Spontaneous breakdown of this supersymmetry is the stochastic generalization of the concept of deterministic chaos, whereas the associated emergence of the long-term dynamical memory or order, i.e., 1/""f"" and crackling noises, the Butterfly effect etc., is the consequence of the Goldstone theorem in the application to the spontaneously broken topological supersymmetry.
It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom and thus the minimal number of supersymmetry generators is four in four dimensions and having eight copies of supersymmetry means that there are 32 supersymmetry generators.
Kinematic dynamo can be also viewed as the phenomenon of the spontaneous breakdown of the topological supersymmetry of the associated stochastic differential equation related to the flow of the background matter. Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of ""all"" stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flows. When the continuity of that flow spontaneously breaks down, the system is in the stochastic state of ""deterministic chaos"". In other words, kinematic dynamo arises because of chaotic flow in the underlying background matter.","In a supersymmetric theory the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. Supersymmetry is a spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics.In supersymmetry, each particle from one class would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly ""unbroken"" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified.
Extended supersymmetry It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom and thus the minimal number of supersymmetry generators is four in four dimensions and having eight copies of supersymmetry means that there are 32 supersymmetry generators.
Spontaneous breakdown of a topological supersymmetry Kinematic dynamo can be also viewed as the phenomenon of the spontaneous breakdown of the topological supersymmetry of the associated stochastic differential equation related to the flow of the background matter. Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of all stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flows. When the continuity of that flow spontaneously breaks down, the system is in the stochastic state of deterministic chaos. In other words, kinematic dynamo arises because of chaotic flow in the underlying background matter.","Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of ""all"" stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flowsWithin stochastic supersymmetric theory, this supersymmetry is an intrinsic property of all stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flowsThis supersymmetry is an intrinsic property of all stochastic differential equations and its meaning is the preservation of the continuity of the phase space by continuous time dynamicsMore complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.
In supersymmetric theory of stochastics, an approximation-free theory of stochastic differential equations, 1/""f"" noise is one of the manifestations of the spontaneous breakdown of topological supersymmetry. Supersymmetry is a spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. In supersymmetry, each particle from one class would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. Spontaneous breakdown of this supersymmetry is the stochastic generalization of the concept of deterministic chaos, whereas the associated emergence of the long-term dynamical memory or order, i.e., 1/""f"" and crackling noises, the Butterfly effect etc., is the consequence of the Goldstone theorem in the application to the spontaneously broken topological supersymmetry.
It is possible to have more than one kind of supersymmetry transformationDozens of supersymmetric theories existThe more ","Within stochastic supersymmetric theory, this supersymmetry is an intrinsic property of ""all"" stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flowsWithin stochastic supersymmetric theory, this supersymmetry is an intrinsic property of all stochastic differential equations, its interpretation is that the model’s phase space preserves continuity via continuous time flowsThis supersymmetry is an intrinsic property of all stochastic differential equations and its meaning is the preservation of the continuity of the phase space by continuous time dynamicsMore complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass.
In supersymmetric theory of stochastics, an approximation-free theory of stochastic differential equations, 1/""f"" noise is one of the manifestations of the spontaneous breakdown of topological supersymmetry. Supersymmetry is a spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. In supersymmetry, each particle from one class would have an associated particle in the other, known as its superpartner, the spin of which differs by a half-integer. Spontaneous breakdown of this supersymmetry is the stochastic generalization of the concept of deterministic chaos, whereas the associated emergence of the long-term dynamical memory or order, i.e., 1/""f"" and crackling noises, the Butterfly effect etc., is the consequence of the Goldstone theorem in the application to the spontaneously broken topological supersymmetry.
It is possible to have more than one kind of supersymmetry transformation. Dozens of supersymmetric theories exist. The more [SEP]What is the interpretation of supersymmetry in stochastic supersymmetric theory?","['E', 'D', 'C']",1.0
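The context writes the Langevin SDE as \dot x(t) = -\partial U(x(t)) + (2\Theta)^{1/2}\xi(t), with U(x) the Langevin function and \Theta the noise intensity. The sketch below integrates one such SDE with the Euler–Maruyama scheme; the double-well choice U(x) = (x^2 - 1)^2 / 4, the noise intensity, and the step sizes are assumptions for illustration only, and the code does not implement the supersymmetric (BRST) formalism itself, just a member of the class of models the theory describes.
```python
import numpy as np

# Euler-Maruyama integration of a Langevin SDE of the form quoted in the context:
#   dx = -dU/dx dt + sqrt(2*Theta) dW.
# The potential, noise intensity, and step sizes are assumed illustrative values.

rng = np.random.default_rng(0)

def grad_U(x):
    """Gradient of the assumed Langevin function U(x) = (x^2 - 1)^2 / 4."""
    return x * (x**2 - 1.0)

def euler_maruyama(x0=0.0, theta=0.3, dt=1e-3, n_steps=50_000):
    x = np.empty(n_steps)
    x[0] = x0
    for i in range(1, n_steps):
        noise = rng.normal(0.0, np.sqrt(dt))          # Wiener increment over dt
        x[i] = x[i - 1] - grad_U(x[i - 1]) * dt + np.sqrt(2.0 * theta) * noise
    return x

traj = euler_maruyama()
print("mean:", traj.mean(), "fraction of time in right well:", (traj > 0).mean())
```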
"What is the purpose of expressing a map's scale as a ratio, such as 1:10,000?","This concept is derived from the map scale in cartography. * Cartographic scale or map scale: a large-scale map covers a smaller area but embodies more detail, while a small-scale map covers a larger area with less detail. In geography, scale is the level at which a geographical phenomenon occurs or is described. Regardless of the selected type of division, there is a convention that four sheets of a particular scale map are used to depict the same area as one sheet of the next smaller scale map series produced by the same publisher. ==Numbering and naming systems== To determine whether a specific map sheet forms part of a map series, it is often sufficient simply to search for a map sheet number. Many non-European states limit the largest scale of their map series, usually to 1:50,000 scale, frequently due to the large size of the country covered (and hence for financial reasons). If a publisher produces several map series at different scales, for instance 1:25,000, 1:50,000 and 1:100,000, then these series are called scale series. In cartography and spatial analysis, scale effect and zoning effect (different ways of zoning lead to different statistical outcomes) conbimed can lead to modifiable areal unit problem (MAUP). == Types == Spatio-temporal hierarchies in landscape ecology Scale Spatial (m2) Temporal (yr) Micro- 100 \- 106 1 -500 Meso- 106 \- 1010 500 - 10,000 Macro- 1010 \- 1012 10,000 - 1,000,000 Mega- 1012 \- 1,000,000 - In geography, the term ""scale"" can be spatial, temporal, or spatio-temporal, but often (though not always) means spatial scale in spatial analysis. Map series occur when an area is to be covered by a map that, due to its scale, must be spread over several sheets. A map series is a group of topographic or thematic charts or maps usually having the same scale and cartographic specifications, and with each sheet appropriately identified by its publisher as belonging to the same series. In most European countries, the largest scale topographic map series is a 1:25.000 scale series. This system is therefore suitable only for small maps, or those in an irregular sheet division (as in tourist maps published by the private sector), and is seldom now used for modern official map series. Geographers describe geographical phenomena and differences using different scales. A scale factor is used when a real-world set of numbers needs to be represented on a different scale in order to fit a specific number format. In contrast with single sheet maps, map series have the advantage of representing a larger area in a uniform manner and have documented card network designs and recording methods. ==References== ===Notes=== ==Further reading== * * ==External links== * State Survey of North Rhine-Westphalia: old map series * State Agency for Surveying and Geobasis Information Rhineland-Palatinate: old map series * India And Adjacent Countries (IAC): Map series used by Survey of India. In different contexts, ""scale"" could have very different connotations, which could be classified as follows: * Geographic scale or the scale of observation: the spatial extent of a study. *Thurstone scale – This is a scaling technique that incorporates the intensity structure among indicators. From an epistemological perspective, scale is used to describe how detailed an observation is, while ontologically, scale is inherent in the complex interaction between society and nature. 
== Scale effect == The concept of scale is central to geography. However, that is not a correct use of the technical language of cartography, in which the term map series refers exclusively to the phenomenon described here, namely a map published over several sheets. Examples of such series are the German Topographic maps of 1:25.000 scale (TK25) to 1:1,000,000 scale (TK1000). The small scale map series are edited by the Federal Agency for Cartography and Geodesy. ","To indicate the use of south-up orientation, as used in Ancient Africa and some maps in Brazil today.","To indicate the orientation of the map, such as whether the 0° meridian is at the top or bottom of the page.","To indicate the projection used to create the map, such as Buckminster Fuller's Dymaxion projection.","To indicate the arrangement of the map, such as the world map of Gott, Vanderbei, and Goldberg arranged as a pair of disks back-to-back.",To indicate the relationship between the size of the map and the size of the area being represented.,E,kaggle200,"Newly published maps, like books, are recorded in national bibliographies. Thus, the title, author(s), imprint and ISBN of any recently published map are mentioned in official records. Additionally, various data specific to a map, such as scale, map projection, geographical coordinates and map format, are included in the records of that map.
It's not always clear whether an ancient artifact had been wrought as a map or as something else. The definition of ""map"" is also not precise. Thus, no single artifact is generally accepted to be the earliest surviving map. Candidates include:
In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map formula_144
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. The scale statement can be accurate when the region mapped is small enough for the curvature of the Earth to be neglected, such as a city map. Mapping larger regions, where the curvature cannot be ignored, requires projections to map from the curved surface of the Earth to the plane. The impossibility of flattening the sphere to the plane without distortion means that the map cannot have a constant scale. Rather, on most projections, the best that can be attained is an accurate scale along one or two paths on the projection. Because scale differs everywhere, it can only be measured meaningfully as point scale per location. Most maps strive to keep point scale variation within narrow bounds. Although the scale statement is nominal it is usually accurate enough for most purposes unless the map covers a large fraction of the earth. At the scope of a world map, scale as a single number is practically meaningless throughout most of the map. Instead, it usually refers to the scale along the equator.","Maps not oriented with north at the top: Medieval European T and O maps such as the Hereford Mappa Mundi were centered on Jerusalem with East at the top. Indeed, before the reintroduction of Ptolemy's Geography to Europe around 1400, there was no single convention in the West. Portolan charts, for example, are oriented to the shores they describe.
Maps of cities bordering a sea are often conventionally oriented with the sea at the top.
Route and channel maps have traditionally been oriented to the road or waterway they describe.
Polar maps of the Arctic or Antarctic regions are conventionally centered on the pole; the direction North would be toward or away from the center of the map, respectively. Typical maps of the Arctic have 0° meridian toward the bottom of the page; maps of the Antarctic have the 0° meridian toward the top of the page.
South-up maps invert the North is up convention by having south at the top. Ancient Africans including in Ancient Egypt used this orientation, as some maps in Brazil do today.
Buckminster Fuller's Dymaxion maps are based on a projection of the Earth's sphere onto an icosahedron. The resulting triangular pieces may be arranged in any order or orientation.
Using the equator as the edge, the world map of Gott, Vanderbei, and Goldberg is arranged as a pair of disks back-to-back designed to present the least error possible. They are designed to be printed as a two-sided flat object that could be held easily for educational purposes.
Relief map of Guatemala The Relief map of Guatemala was made by Francisco Vela in 1905 and still exists. This map (horizontal scale 1:10,000; vertical scale 1:2,000) measures 1,800 m2, and was created to educate children in the scape of their country.
List
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. The scale statement can be accurate when the region mapped is small enough for the curvature of the Earth to be neglected, such as a city map. Mapping larger regions, where the curvature cannot be ignored, requires projections to map from the curved surface of the Earth to the plane. The impossibility of flattening the sphere to the plane without distortion means that the map cannot have a constant scale. Rather, on most projections, the best that can be attained is an accurate scale along one or two paths on the projection. Because scale differs everywhere, it can only be measured meaningfully as point scale per location. Most maps strive to keep point scale variation within narrow bounds. Although the scale statement is nominal it is usually accurate enough for most purposes unless the map covers a large fraction of the earth. At the scope of a world map, scale as a single number is practically meaningless throughout most of the map. Instead, it usually refers to the scale along the equator.","This map (horizontal scale 1:10,000; vertical scale 1:2,000) measures 1,800 m2, and was created to educate children in the scape of their country.
List
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. At the scope of a world map, scale as a single number is practically meaningless throughout most of the map. Although the scale statement is nominal it is usually accurate enough for most purposes unless the map covers a large fraction of the earth. Most maps strive to keep point scale variation within narrow bounds. Because scale differs everywhere, it can only be measured meaningfully as point scale per location. The scale statement can be accurate when the region mapped is small enough for the curvature of the Earth to be neglected, such as a city map. Instead, it usually refers to the scale along the equator. Additionally, various data specific to a map, such as scale, map projection, geographical coordinates and map format, are included in the records of that map.
It's not always clear whether an ancient artifact had been wrought as a map or as something else. Rather, on most projections, the best that can be attained is an accurate scale along one or two paths on the projection. Candidates include:
In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map formula_144
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. The impossibility of flattening the sphere to the plane without distortion means that the map cannot have a constant scale. They are designed to be printed as a two-sided flat object that could be held easily for educational purposes.
Relief map of Guatemala: The Relief map of Guatemala was made by Francisco Vela in 1905 and still exists. The definition of ""map"" is als","This map (horizontal scale 1:10,000; vertical scale 1:2,000) measures 1,800 m², and was created to educate children in the shape of their country.
List
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. At the scope of a world map, scale as a single number is practically meaningless throughout most of the map. Although the scale statement is nominal it is usually accurate enough for most purposes unless the map covers a large fraction of the earth. Most maps strive to keep point scale variation within narrow bounds. Because scale differs everywhere, it can only be measured meaningfully as point scale per location. The scale statement can be accurate when the region mapped is small enough for the curvature of the Earth to be neglected, such as a city map. Instead, it usually refers to the scale along the equator. Additionally, various data specific to a map, such as scale, map projection, geographical coordinates and map format, are included in the records of that map.
It's not always clear whether an ancient artifact had been wrought as a map or as something else. Rather, on most projections, the best that can be attained is an accurate scale along one or two paths on the projection. Candidates include:
In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map formula_144
Many maps are drawn to a scale expressed as a ratio, such as 1:10,000, which means that 1 unit of measurement on the map corresponds to 10,000 of that same unit on the ground. The impossibility of flattening the sphere to the plane without distortion means that the map cannot have a constant scale. They are designed to be printed as a two-sided flat object that could be held easily for educational purposes.
Relief map of Guatemala: The Relief map of Guatemala was made by Francisco Vela in 1905 and still exists. The definition of ""map"" is als[SEP]What is the purpose of expressing a map's scale as a ratio, such as 1:10,000?","['E', 'D', 'C']",1.0
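The ratio arithmetic behind the 1:10,000 scale discussed in this row is easy to verify. The following is a minimal Python sketch, not part of the dataset; the helper name and the 4 cm measurement are illustrative assumptions, used only to show how a ratio scale converts a map distance to a ground distance.

# Minimal sketch (illustrative, not from the dataset): a 1:X ratio scale means
# 1 unit on the map corresponds to X of the same unit on the ground.
def ground_distance_m(map_distance_cm, scale_denominator=10_000):
    """Ground distance in metres for a map distance measured in centimetres."""
    ground_cm = map_distance_cm * scale_denominator  # same unit on the ground
    return ground_cm / 100.0                         # centimetres -> metres

# Hypothetical measurement: 4 cm on a 1:10,000 map is 40,000 cm = 400 m on the ground.
print(ground_distance_m(4))  # 400.0

As the row itself stresses, such a single-number conversion is only meaningful where the point scale stays close to the nominal scale, i.e. for maps of small regions.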
What is the main sequence in astronomy?,"In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. The most massive stars will leave the main sequence first, followed in sequence by stars of ever lower masses. Stars on this band are known as main-sequence stars or dwarf stars. The main sequence is sometimes divided into upper and lower parts, based on the dominant process that a star uses to generate energy. On average, main-sequence stars are known to follow an empirical mass–luminosity relationship. Stars of luminosity class V belonged to the main sequence. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence. The common use of ""dwarf"" to mean the main sequence is confusing in another way because there are dwarf stars that are not main-sequence stars. The observed upper limit for a main-sequence star is 120–200 . Thus, roughly speaking, stars of spectral class F or cooler belong to the lower main sequence, while A-type stars or hotter are upper main-sequence stars. During the initial collapse, this pre-main-sequence star generates energy through gravitational contraction. Thus, about 90% of the observed stars above 0.5 will be on the main sequence. As this is the core temperature of a star with about 1.5 , the upper main sequence consists of stars above this mass. A sufficiently dense, and hot, core region will trigger nuclear fusion, thus creating a main-sequence star. This effect results in a broadening of the main sequence band because stars are observed at random stages in their lifetime. Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime. ==Properties== The majority of stars on a typical HR diagram lie along the main-sequence curve. Astronomers divide the main sequence into upper and lower parts, based on which of the two is the dominant fusion process. Thus, the most massive stars may remain on the main sequence for only a few million years, while stars with less than a tenth of a solar mass may last for over a trillion years. Main-sequence stars below undergo convection throughout their mass. Main-sequence stars with more than two solar masses undergo convection in their core regions, which acts to stir up the newly created helium and maintain the proportion of fuel needed for fusion to occur. ",The main sequence is a type of galaxy that contains a large number of stars.,The main sequence is a type of black hole that is formed from the collapse of a massive star.,The main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. Stars on this band are known as main-sequence stars or dwarf stars.,The main sequence is a group of planets that orbit around a star in a solar system.,The main sequence is a type of nebula that is formed from the explosion of a supernova.,C,kaggle200,"Subgiants occupy a region above (i.e. more luminous than) the main sequence stars and below the giant stars. There are relatively few on most H–R diagrams because the time spent as a subgiant is much less than the time spent on the main sequence or as a giant star. Hot, class B, subgiants are barely distinguishable from the main sequence stars, while cooler subgiants fill a relatively large gap between cool main sequence stars and the red giants. Below approximately spectral type K3 the region between the main sequence and red giants is entirely empty, with no subgiants.
A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.
A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. These color-magnitude plots are known as Hertzsprung–Russell diagrams after their co-developers, Ejnar Hertzsprung and Henry Norris Russell. Stars on this band are known as main-sequence stars or dwarf stars. These are the most numerous true stars in the universe and include the Sun.","A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan.
The turnoff point for a star refers to the point on the Hertzsprung–Russell diagram where it leaves the main sequence after its main fuel is exhausted – the main sequence turnoff.
By plotting the turnoff points of individual stars in a star cluster one can estimate the cluster's age.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. These color-magnitude plots are known as Hertzsprung–Russell diagrams after their co-developers, Ejnar Hertzsprung and Henry Norris Russell. Stars on this band are known as main-sequence stars or dwarf stars. These are the most numerous true stars in the universe and include the Sun.","The Sun is thought to be in the middle of its main sequence lifespan.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. The Sun is thought to be in the middle of its main sequence lifespan.
The turnoff point for a star refers to the point on the Hertzsprung–Russell diagram where it leaves the main sequence after its main fuel is exhausted – the main sequence turnoff.
By plotting the turnoff points of individual stars in a star cluster one can estimate the cluster's age.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.
A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Stars on this band are known as main-sequence stars or dwarf stars. Subgiants occupy a region above (i.e. more luminous than) the main sequence stars and below the giant stars. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. Below approximately spectral type K3 the region between the main sequence and red giants is entirely empty, with no subgiants.
A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. Hot, class B, subgiants are barely distinguishable from the main sequence stars, while cooler subgiants fill a relatively large gap between cool main sequence stars and the red giants.","The Sun is thought to be in the middle of its main sequence lifespan.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. The Sun is thought to be in the middle of its main sequence lifespan.
The turnoff point for a star refers to the point on the Hertzsprung–Russell diagram where it leaves the main sequence after its main fuel is exhausted – the main sequence turnoff.
By plotting the turnoff points of individual stars in a star cluster one can estimate the cluster's age.
In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color versus brightness. (On the HR diagram, the evolving star moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-burning stage of a star's lifetime.
A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Stars on this band are known as main-sequence stars or dwarf stars. Subgiants occupy a region above (i.e. more luminous than) the main sequence stars and below the giant stars. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. Below approximately spectral type K3 the region between the main sequence and red giants is entirely empty, with no subgiants.
A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core has been consumed, then begins to evolve into a more luminous star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. Hot, class B, subgiants are barely distinguishable from the main sequence stars, while cooler subgiants fill a relatively large gap between cool main sequence stars and the red giants.[SEP]What is the main sequence in astronomy?","['C', 'D', 'B']",1.0
"Who proposed the concept of ""maximal acceleration""?","The Great Acceleration is the dramatic, continuous and roughly simultaneous surge across a large range of measures of human activity, first recorded in the mid-20th century and continuing to this day. Environmental historian J. R. McNeill has argued that the Great Acceleration is idiosyncratic of the current age and is set to halt in the near future; that it has never happened before and will never happen again. Related to Great Acceleration is the concept of accelerating change. Accelerationen (Accelerations), op. 234, is a waltz composed by Johann Strauss II in 1860 for the Engineering Students' Ball at the Sofienbad-Saal in Vienna. This page lists examples of the acceleration occurring in various situations. In other words, in order to define acceleration an additional structure on M must be given. The concept of acceleration is a covariant derivative concept. The concept of acceleration most often arises within the context of contract law. In mathematics and physics, acceleration is the rate of change of velocity of a curve with respect to a given linear connection. In the concept, the Great Acceleration can be variously classified as the only age of the epoch to date, one of many ages of the epoch – depending on the epoch's proposed start date – or a defining feature of the epoch that is thus not an age, as well as other classifications. Acceleration is defined in law as a shortening of the time period in which something is to take place. Therefore, while adherents of the theory of accelerating change do not comment on the short-term fate of the Great Acceleration, they do hold that its eventual fate is continuation, which also contradicts McNeill's conclusions. ==Overview== In tracking the effects of human activity upon the Earth, a number of socioeconomic and earth system parameters are utilized including population, economics, water usage, food production, transportation, technology, greenhouse gases, surface temperature, and natural resource usage. Using abstract index notation, the acceleration of a given curve with unit tangent vector \xi^a is given by \xi^{b} abla_{b}\xi^{a}. 
==See also== *Acceleration *Covariant derivative ==Notes== ==References== * * * Category:Differential geometry Category:Manifolds Many turns have 5 g peak values, like turn 8 at Istanbul or Eau Rouge at Spa 101 1 dam/s2 inertial 59 m/s2 6 g Parachutist peak during normal opening of parachute 101 1 dam/s2 inertial m/s2 Standard, full aerobatics certified glider 101 1 dam/s2 inertial 70.6 m/s2 7.19 g Apollo 16 on reentryNASA: SP-368 Biomedical Results of Apollo, Chapter 5: Environmental Factors, Table 2: Apollo Manned Space Flight Reentry G Levels 101 1 dam/s2 inertial 79 m/s2 8 g F-16 aircraft pulling out of dive 101 1 dam/s2 inertial 9 g Maximum for a fit, trained person with G-suit to keep consciousness, avoiding G-LOC 101 1 dam/s2 inertial Typical maximum turn acceleration in an aerobatic plane or fighter jet 1 hm/s2 inertial 147 m/s2 15 g Explosive seat ejection from aircraft 1 hm/s2 inertial 18 g Physical damage in humans like broken capillaries 1 hm/s2 inertial 21.3 g Peak acceleration experienced by cosmonauts during the Soyuz 18a abort 1 hm/s2 inertial 34 g Peak deceleration of the Stardust Sample Return Capsule on reentry to Earth 1 hm/s2 inertial 46.2 g Maximum acceleration a human has survived on a rocket sled 1 hm/s2 inertial > 50 g Death or serious injury likely 1 hm/s2 inertial 982 m/s2 100 g Sprint missileSprint 1 hm/s2 inertial 982 m/s2 100 g Automobile crash (100 km/h into wall)tomshardware.co.uk - Hard Drive Shock Tolerance - Hard-Disks - Storage , Physics, by O'hanian, 1989, 2007-01-03 1 hm/s2 inertial Brief human exposure survived in crash“Several Indy car drivers have withstood impacts in excess of 100 G without serious injuries.” And National Highway Traffic Safety Administration: Recording Automotive Crash Event Data 1 hm/s2 inertial 100 g Deadly limit for most humans 1 km/s2 inertial ≈ lab 157 g Peak acceleration of fastest rocket sled run 1 km/s2 inertial ≈ lab 1964 m/s2 200 g 3.5"" hard disc non-operating shock tolerance for 2 ms, weight 0.6 kgwdc.com - Legacy Product Specifications : WD600BB , read 2012-01-11 1 km/s2 inertial ≈ lab 2098 m/s2 214 g Highest recorded amount of g-force exposed and survived by a human (Peak deceleration experienced by Kenny Bräck in a crash at the 2003 Chevy 500)Feel the G's: The Science of Gravity and G-Forces - by Suzanne Slade (page 37) 1 km/s2 inertial ≈ lab 2256 m/s2 230 g Peak acceleration experience by the Galileo probe during descent into Jupiter's atmosphere 1 km/s2 inertial ≈ lab 2490 m/s2 254 g Peak deceleration experienced by Jules Bianchi in crash of Marussia MR03, 2014 Japanese Grand Prix 1 km/s2 inertial ≈ lab 2946 m/s2 300 g Soccer ball struck by foot 1 km/s2 inertial ≈ lab 3200 m/s2 320 g A jumping human flea 1 km/s2 inertial ≈ lab 3800 m/s2 380 g A jumping click beetle 1 km/s2 inertial ≈ lab 4944 m/s2 504 g Clothes on washing machine, during dry spinning (46 cm drum / 1400 rpm) 10 km/s2 Deceleration of the head of a woodpecker 10 km/s2 Space gun with a barrel length of and a muzzle velocity of , as proposed by Quicklaunch (assuming constant acceleration) 10 km/s2 29460 m/s2 3000 g Baseball struck by bat 10 km/s2 Standard requirement for decelerative crashworthiness in certified flight recorders (such as a Boeing 737 'black box') 10 km/s2 Shock capability of mechanical wrist watchesOmega , Ball Watch Technology 10 km/s2 Current Formula One engines, maximum piston acceleration (up to 10,000 g before rev limits)Cosworth V8 engine 100 km/s2 A mantis shrimp punch 100 km/s2 Rating of electronics built into military 
artillery shells 100 km/s2 Spore acceleration of the Pilobolus fungibu.edu - Rockets in Horse Poop, 2010-12-10 100 km/s2 9×19mm Parabellum handgun bullet (average along the length of the barrel)Assuming an 8.04 gram bullet, a muzzle velocity of , and a 102 mm barrel. 1 Mm/s2 Closing jaws of a trap-jaw ant 1 Mm/s2 9×19mm Parabellum handgun bullet, peakAssuming an 8.04 gram bullet, a peak pressure of and 440 N of friction. 1 Mm/s2 Surface gravity of white dwarf Sirius B 1 Mm/s2 UltracentrifugeBerkeley Physics Course, vol. 1, Mechanics, fig. 4.1 (authors Kittel-Knight-Ruderman, 1973 edition) 10 Mm/s2 Jellyfish stinger 1 Gm/s2 1 m/s2 The record peak acceleration of a projectile in a coilgun, a 2 gram projectile accelerated in 1 cm from rest to 5 km/sec.K. McKinney and P. Mongeau, ""Multiple stage pulsed induction acceleration,"" in IEEE Transactions on Magnetics, vol. 20, no. 2, pp. 239-242, March 1984, doi: 10.1109/TMAG.1984.1063089. 1 Tm/s2 7 m/s2 7 g Max surface gravity of a neutron star 1 Tm/s2 2.1 m/s2 2.1 g Protons in the Large Hadron ColliderCalculated from their speed and radius, approximating the LHC as a circle. 1 Zm/s2 9.149 m/s2 g Classical (Bohr model) acceleration of an electron around a 1H nucleus. 1 Zm/s2 176 m/s2 1.79 g Electrons in a 1 TV/m wakefield accelerator 1 QZm/s2 Coherent Planck unit of acceleration ==See also== *G-force *Gravitational acceleration *Mechanical shock *Standard gravity *International System of Units (SI) *SI prefix ==References== Acceleration The acceleration vector of \gamma is defined by abla_{\dot\gamma}{\dot\gamma} , where abla denotes the covariant derivative associated to \Gamma. Accelerations is featured in Erich Wolfgang Korngold's The Tales of Strauss, Op. 21 as well as many of Strauss's other well-known waltzes. ==References== Category:1860 compositions Category:Waltzes by Johann Strauss II The TRIAD 1 satellite was a later, more advanced navigation satellite that was part of the U.S. Navy’s Transit, or NAVSAT system. inertial ≈ 0 m/s2 ≈ 0 g Weightless parabola in a reduced-gravity aircraft lab Smallest acceleration in a scientific experiment Solar system Acceleration of Earth toward the sun due to sun's gravitational attraction lab 0.25 m/s2 0.026 g Train acceleration for SJ X2 inertial 1.62 m/s2 0.1654 g Standing on the Moon at its equator lab 4.3 m/s2 0.44 g Car acceleration 0–100 km/h in 6.4 s with a Saab 9-5 Hirsch inertial 1 g Standard gravity, the gravity acceleration on Earth at sea level standard 101 1 dam/s2 inertial 11.2 m/s2 1.14 g Saturn V moon rocket just after launch 101 1 dam/s2 inertial 15.2 m/s2 1.55 g Bugatti Veyron from 0 to in (the net acceleration vector including gravitational acceleration is directed 40 degrees from horizontal) 101 1 dam/s2 inertial 29 m/s2 3 g Space Shuttle, maximum during launch and reentry 101 1 dam/s2 inertial 3 g Sustainable for > 25 seconds, for a human 101 1 dam/s2 inertial g High-G roller coastersGeorge Bibel. An acceleration clause, also known as an acceleration covenant, may be included within a contract, so as to fully mature the performance due from a party upon a breach of the contract, such as by requiring payment in full upon the contract if a borrower materially breaches a loan agreement. With an acceleration clause a landlord may be able to sue for damages when a breach of the lease agreement occurs. == References == Category:Legal terminology ",Max Planck,Niels Bohr,Eduardo R. 
Caianiello,Hideki Yukawa,Albert Einstein,C,kaggle200,"In April 1957, Japanese engineer Jun-ichi Nishizawa proposed the concept of a ""semiconductor optical maser"" in a patent application.
Marvel Comics invented the Cobra concept, with the name having been proposed by Archie Goodwin. When Marvel first proposed the concept, Hasbro was reluctant to make toys of the villains for fear that they would not sell. According to Jim Shooter, ""later ... villains became 40% of their volume.""
Various standards such as IEEE 802.22 and IEEE 802.11af have been proposed for this concept. The term ""White-Fi"" has also been used to indicate the use of white space for IEEE 802.11af.
However, Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. In 1981, Eduardo R. Caianiello proposed a ""maximal acceleration"", just as there is a minimal length at the Planck scale, and this concept of maximal acceleration has been expanded upon by others. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry.","maximal 1. For ""maximal compact subgroup"", see #compact.
2. For ""maximal torus"", see #torus.
That that is is that that is. ""Not"" is not. Is that it? It is.
That that is is that that is not. Is ""'not' is that"" it? It is.
That that is is that that is not ""is not"". Is that it? It is.
However, Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. In 1981, Eduardo R. Caianiello proposed a ""maximal acceleration"", just as there is a minimal length at the Planck scale, and this concept of maximal acceleration has been expanded upon by others. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry. Born chose the term ""reciprocity"" for the reason that in a crystal lattice, the motion of a particle can be described in p-space by means of the reciprocal lattice.","In 1981, Eduardo R. Caianiello proposed a ""maximal acceleration"", just as there is a minimal length at the Planck scale, and this concept of maximal acceleration has been expanded upon by others. However, Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry. Born chose the term ""reciprocity"" for the reason that in a crystal lattice, the motion of a particle can be described in p-space by means of the reciprocal lattice.
maximal 1. For ""maximal compact subgroup"", see #compact. 2. For ""maximal torus"", see #torus.
In April 1957, Japanese engineer Jun-ichi Nishizawa proposed the concept of a ""semiconductor optical maser"" in a patent application.
Marvel Comics invented the Cobra concept, with the name having been proposed by Archie Goodwin. When Marvel first proposed the concept, Hasbro was reluctant to make toys of the villains for fear that they would not sell. According to Jim Shooter, ""later ... villains became 40% of their volume.""
Various standards such as IEEE 802.22 and IEEE 802.11af have been proposed for this concept. The term ""White-Fi"" has also been used to indicate the use of white space for IEEE 802.11af.
That that is is that that is. ""Not"" is not. Is that it? It is. That that is is that that is not. Is ""'not' is that"" it? It is. That that is is that that is not ""is not"".","In 1981, Eduardo R. Caianiello proposed a ""maximal acceleration"", just as there is a minimal length at the Planck scale, and this concept of maximal acceleration has been expanded upon by others. However, Born's idea of a quantum metric operator was later taken up by Hideki Yukawa when developing his nonlocal quantum theory in the 1950s. It has also been suggested that Born reciprocity may be the underlying physical reason for the T-duality symmetry in string theory, and that Born reciprocity may be of relevance to developing a quantum geometry. Born chose the term ""reciprocity"" for the reason that in a crystal lattice, the motion of a particle can be described in p-space by means of the reciprocal lattice.
maximal 1. For ""maximal compact subgroup"", see #compact. 2. For ""maximal torus"", see #torus.
In April 1957, Japanese engineer Jun-ichi Nishizawa proposed the concept of a ""semiconductor optical maser"" in a patent application.
Marvel Comics invented the Cobra concept, with the name having been proposed by Archie Goodwin. When Marvel first proposed the concept, Hasbro was reluctant to make toys of the villains for fear that they would not sell. According to Jim Shooter, ""later ... villains became 40% of their volume.""
Various standards such as IEEE 802.22 and IEEE 802.11af have been proposed for this concept. The term ""White-Fi"" has also been used to indicate the use of white space for IEEE 802.11af.
That that is is that that is. ""Not"" is not. Is that it? It is. That that is is that that is not. Is ""'not' is that"" it? It is. That that is is that that is not ""is not"".[SEP]Who proposed the concept of ""maximal acceleration""?","['C', 'E', 'D']",1.0
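One entry in this row's acceleration table, the coilgun record (a 2 gram projectile taken from rest to 5 km/s over 1 cm), can be checked with elementary kinematics. The constant-acceleration assumption below is mine, used only to reproduce the order of magnitude quoted in the table.

\[
a \;=\; \frac{v^{2}}{2s} \;=\; \frac{(5\times10^{3}\,\mathrm{m/s})^{2}}{2\times 0.01\,\mathrm{m}} \;=\; 1.25\times10^{9}\,\mathrm{m/s^{2}} \;\approx\; 1.3\times10^{8}\,g,
\]

which falls in the 1 Gm/s² decade under which the table lists it.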
What is indirect photophoresis?,"Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particle. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source. Separately from photophoresis, in a fluid mixture of different kinds of particles, the migration of some kinds of particles may be due to differences in their absorptions of thermal radiation and other thermal effects collectively known as thermophoresis. Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light. In laser photophoresis, particles migrate once they have a refractive index different from their surrounding medium. Photophoresis is applied in particle trapping and levitation, in the field flow fractionation of particles, in the determination of thermal conductivity and temperature of microscopic grains and also in the transport of soot particles in the atmosphere. Indirect photophoretic force depends on the physical properties of the particle and the surrounding medium. They suggest uses for telecommunications, and deployment on Mars. ==Theory of photophoresis== Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection.Ashkin, A. 2000 IEEE Journal of Selected Topics in Quantum Electronics, 6,841-856. Molecules with higher kinetic energy in the region of higher gas temperature impinge on the particle with greater momenta than molecules in the cold region; this causes a migration of particles in a direction opposite to the surface temperature gradient. Photostimulation methods fall into two general categories: one set of methods uses light to uncage a compound that then becomes biochemically active, binding to a downstream effector. One example is when a certain wavelength of light is put onto certain channels, the blockage in the pore is relieved and allows ion transduction. The component of the photophoretic force responsible for this phenomenon is called the radiometric force. Photostimulation can be used to noninvasively probe various relationships between different biological processes, using only light. Often, the design function in such a way that a medium is met between the diffusing light that may cause additional, unwanted photolysis and light attenuation; both being significant problems with a photolysis system. ==History== The idea of photostimulation as a method of controlling biomolecule function was developed in the 1970s. A particle with a higher refractive index compared to its surrounding molecule moves away from the light source due to momentum transfer from absorbed and scattered light photons. Just like in Crookes radiometer, light can heat up one side and gas molecules bounce from that surface with greater velocity, hence push the particle to the other side. 
The existence of this phenomenon is owed to a non-uniform distribution of temperature of an illuminated particle in a fluid medium. Movement of particles in the forward direction occurs when the particle is transparent and has an index of refraction larger compared to its surrounding medium. Photostimulation is the use of light to artificially activate biological compounds, cells, tissues, or even whole organisms. The steps of photostimulation are time independent in that protein delivery and light activation can be done at different times. ","Indirect photophoresis is a phenomenon that occurs when particles absorb incident light uniformly, creating a temperature gradient within the particle, and causing a migration of particles in a random direction.","Indirect photophoresis is a phenomenon that occurs when particles absorb incident light only on the irradiated side, creating a temperature gradient within the particle, and causing a migration of particles in the same direction as the surface temperature gradient.","Indirect photophoresis is a phenomenon that occurs when particles absorb incident light uniformly, creating a temperature gradient within the particle, and causing a migration of particles in the same direction as the surface temperature gradient.","Indirect photophoresis is a phenomenon that occurs when particles absorb incident light only on the irradiated side, creating a temperature gradient within the particle, and causing a migration of particles in a direction opposite to the surface temperature gradient.","Indirect photophoresis is a phenomenon that occurs when particles absorb incident light uniformly, creating a temperature gradient within the particle, and causing a migration of particles in a direction opposite to the surface temperature gradient.",D,kaggle200,"If the temperature gradient operator exceeds one, the mean temperature gradient is larger than the critical temperature gradient and the stack operates as a prime mover. If the temperature gradient operator is less than one, the mean temperature gradient is smaller than the critical gradient and the stack operates as a heat pump.
The applications of photophoresis extend into various divisions of science: physics, chemistry, and biology. Photophoresis is applied in particle trapping and levitation, in the field flow fractionation of particles, in the determination of thermal conductivity and temperature of microscopic grains, and also in the transport of soot particles in the atmosphere. The use of light to separate aerosol particles based on their optical properties makes possible the separation of organic and inorganic particles of the same aerodynamic size.
Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light. The existence of this phenomenon is owed to a non-uniform distribution of temperature of an illuminated particle in a fluid medium. Separately from photophoresis, in a fluid mixture of different kinds of particles, the migration of some kinds of particles may be due to differences in their absorptions of thermal radiation and other thermal effects collectively known as thermophoresis. In laser photophoresis, particles migrate once they have a refractive index different from their surrounding medium. The migration of particles is usually possible when the laser is slightly or not focused. A particle with a higher refractive index compared to its surrounding molecule moves away from the light source due to momentum transfer from absorbed and scattered light photons. This is referred to as a radiation pressure force. This force depends on light intensity and particle size but has nothing to do with the surrounding medium. Just like in Crookes radiometer, light can heat up one side and gas molecules bounce from that surface with greater velocity, hence push the particle to the other side. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. Movement of particles in the forward direction occurs when the particle is transparent and has an index of refraction larger compared to its surrounding medium. Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particle. In this situation the surrounding gas layer reaches temperature equilibrium with the surface of the particle. Molecules with higher kinetic energy in the region of higher gas temperature impinge on the particle with greater momenta than molecules in the cold region; this causes a migration of particles in a direction opposite to the surface temperature gradient. The component of the photophoretic force responsible for this phenomenon is called the radiometric force. This comes as a result of uneven distribution of radiant energy (source function within a particle).","Indirect photophoretic force depends on the physical properties of the particle and the surrounding medium.
Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light. The existence of this phenomenon is owed to a non-uniform distribution of temperature of an illuminated particle in a fluid medium. Separately from photophoresis, in a fluid mixture of different kinds of particles, the migration of some kinds of particles may be due to differences in their absorptions of thermal radiation and other thermal effects collectively known as thermophoresis. In laser photophoresis, particles migrate once they have a refractive index different from their surrounding medium. The migration of particles is usually possible when the laser is slightly or not focused. A particle with a higher refractive index compared to its surrounding molecule moves away from the light source due to momentum transfer from absorbed and scattered light photons. This is referred to as a radiation pressure force. This force depends on light intensity and particle size but has nothing to do with the surrounding medium. Just like in Crookes radiometer, light can heat up one side and gas molecules bounce from that surface with greater velocity, hence push the particle to the other side. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source.If the suspended particle is rotating, it will also experience the Yarkovsky effect.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. Movement of particles in the forward direction occurs when the particle is transparent and has an index of refraction larger compared to its surrounding medium. Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particle. In this situation the surrounding gas layer reaches temperature equilibrium with the surface of the particle. Molecules with higher kinetic energy in the region of higher gas temperature impinge on the particle with greater momenta than molecules in the cold region; this causes a migration of particles in a direction opposite to the surface temperature gradient. The component of the photophoretic force responsible for this phenomenon is called the radiometric force. This comes as a result of uneven distribution of radiant energy (source function within a particle).","Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particleUnder certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. Indirect photophoretic force depends on the physical properties of the particle and the surrounding medium.
Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source. If the suspended particle is rotating, it will also experience the Yarkovsky effect.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflectionPhotophoresis is applied in particle trapping and levitation, in the field flow fractionation of particles, in the determination of thermal conductivity and temperature of microscopic grains and also in the transport of soot particles in the atmosphereIn laser photophoresis, particles migrate once they have a refractive in","Indirect photophoresis occurs as a result of an increase in the kinetic energy of molecules when particles absorb incident light only on the irradiated side, thus creating a temperature gradient within the particleUnder certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. Indirect photophoretic force depends on the physical properties of the particle and the surrounding medium.
Photophoresis denotes the phenomenon that small particles suspended in gas (aerosols) or liquids (hydrocolloids) start to migrate when illuminated by a sufficiently intense beam of light. Under certain conditions, with particles of diameter comparable to the wavelength of light, the phenomenon of a negative indirect photophoresis occurs, due to the unequal heat generation on the laser irradiation between the back and front sides of particles, this produces a temperature gradient in the medium around the particle such that molecules at the far side of the particle from the light source may get to heat up more, causing the particle to move towards the light source. If the suspended particle is rotating, it will also experience the Yarkovsky effect.
Direct photophoresis is caused by the transfer of photon momentum to a particle by refraction and reflection. Photophoresis is applied in particle trapping and levitation, in the field flow fractionation of particles, in the determination of thermal conductivity and temperature of microscopic grains and also in the transport of soot particles in the atmosphere. In laser photophoresis, particles migrate once they have a refractive index different from their surrounding medium.[SEP]What is indirect photophoresis?","['B', 'D', 'C']",0.5
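This row describes direct photophoresis as the transfer of photon momentum to the particle, i.e. radiation pressure. A standard textbook relation, assumed here and not taken from the dataset, is that a beam of power P exerts a force P/c on a fully absorbing particle and 2P/c on a fully reflecting one; the Python sketch below, with a hypothetical 1 mW beam, only illustrates that relation.

# Standard radiation-pressure estimate (textbook relation, assumed here, not from
# the dataset rows): the momentum flux of a light beam of power P is P/c.
C = 299_792_458.0  # speed of light, m/s

def radiation_force_newtons(power_watts, reflectivity=0.0):
    """Force on a particle intercepting the whole beam.
    reflectivity = 0.0 -> fully absorbing (F = P/c); 1.0 -> fully reflecting (F = 2P/c)."""
    return (1.0 + reflectivity) * power_watts / C

print(radiation_force_newtons(1e-3))       # 1 mW absorbed: ~3.3e-12 N
print(radiation_force_newtons(1e-3, 1.0))  # 1 mW fully reflected: ~6.7e-12 N

Forces of this size are tiny, which is why the indirect (radiometric) effect driven by the temperature gradient described in the row can dominate for absorbing particles in a gas.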
What does Earnshaw's theorem state?,"Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. On the other hand, Earnshaw's theorem only applies to point charges, but not to distributed charges. Since Earnshaw's theorem only applies to stationary charges, there were attempts to explain stability of atoms using planetary models, such as Nagaoka's Saturnian model (1904) and Rutherford's planetary model (1911), where the point electrons are circling a positive point charge in the center. Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields). However, Earnshaw's theorem does not necessarily apply to moving ferromagnets, certain electromagnetic systems, pseudo-levitation and diamagnetic materials. This led J. J. Thomson in 1904 to his plum pudding model, where the negative point charges (electrons, or ""plums"") are embedded into a distributed positive charge ""pudding"", where they could be either stationary or moving along circles; this is a configuration which is non-point positive charges (and also non-stationary negative charges), not covered by Earnshaw's theorem. Earnshaw's theorem has even been proven for the general case of extended bodies, and this is so even if they are flexible and conducting, provided they are not diamagnetic, as diamagnetism constitutes a (small) repulsive force, but no attraction. His most famous contribution, ""Earnshaw's theorem"", shows the impossibility of stable levitating permanent magnets: other topics included optics, waves, dynamics and acoustics in physics, calculus, trigonometry and partial differential equations in mathematics. Eventually this led the way to Schrödinger's model of 1926, where the existence of non-radiative states in which the electron is not a point but rather a distributed charge density resolves the above conundrum at a fundamental level: not only there was no contradiction to Earnshaw's theorem, but also the resulting charge density and the current density are stationary, and so is the corresponding electromagnetic field, no longer radiating the energy to infinity. Earnshaw's theorem forbids magnetic levitation in many common situations. Intuitively, though, it is plausible that if the theorem holds for a single point charge then it would also hold for two opposite point charges connected together. Samuel Earnshaw (1 February 1805, Sheffield, Yorkshire – 6 December 1888, Sheffield, YorkshireGRO Register of Deaths: DEC 1888 9c 246 ECCLESALL B. (aged 83)) was an English clergyman and mathematician and physicist, noted for his contributions to theoretical physics, especially ""Earnshaw's theorem"". This works because the theorem shows only that there is some direction in which there will be an instability. There are, however, no known configurations of permanent magnets that stably levitate so there may be other reasons not discussed here why it is not possible to maintain permanent magnets in orientations antiparallel to magnetic fields (at least not without rotation—see spin-stabilized magnetic levitation. ===Detailed proofs=== Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point charges. 
As a practical consequence, this theorem also states that there is no possible static configuration of ferromagnets that can stably levitate an object against gravity, even when the magnetic forces are stronger than the gravitational forces. Earnshaw published several mathematical and physical articles and books. To be completely rigorous, strictly speaking, the existence of a stable point does not require that all neighbouring force vectors point exactly toward the stable point; the force vectors could spiral in toward the stable point, for example. If the materials are not hard, Braunbeck's extension shows that materials with relative magnetic permeability greater than one (paramagnetism) are further destabilising, but materials with a permeability less than one (diamagnetic materials) permit stable configurations. ==Explanation== Informally, the case of a point charge in an arbitrary static electric field is a simple consequence of Gauss's law. It is also possible to prove this theorem directly from the force/energy equations for static magnetic dipoles (below). A stable equilibrium of the particle cannot exist and there must be an instability in some direction. ",A collection of point charges can be maintained in a stable stationary equilibrium configuration solely by the gravitational interaction of the charges.,A collection of point charges can be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges.,"A collection of point charges can be maintained in a stable stationary equilibrium configuration solely by the magnetic interaction of the charges, if the magnets are hard.",A collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges.,A collection of point charges can be maintained in a stable stationary equilibrium configuration solely by the magnetic interaction of the charges.,D,kaggle200,"Conceptually, this method uses an electrostatic analogy, modeling the approximated zeros as movable negative point charges, which converge toward the true zeros, represented by fixed positive point charges. A direct application of Newton's method to each approximated zero will often cause multiple starting points to incorrectly converge to the same root. The Aberth method avoids this by also modeling the repulsive effect the movable charges have on each other. In this way, when a movable charge has converged on a zero, their charges will cancel out, so that other movable charges are no longer attracted to that location, encouraging them to converge to other ""unoccupied"" zeros. (Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.)
Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point charges. The proofs presented here for individual dipoles should be generalizable to collections of magnetic dipoles because they are formulated in terms of energy, which is additive. A rigorous treatment of this topic is, however, currently beyond the scope of this article.
Similar to point masses, in electromagnetism physicists discuss a point charge, a point particle with a nonzero electric charge. The fundamental equation of electrostatics is Coulomb's law, which describes the electric force between two point charges. Another result, Earnshaw's theorem, states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges. The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. This was first proven by British mathematician Samuel Earnshaw in 1842.","Full descriptions of the no-go theorems named below are given in other articles linked to their names. A few of them are broad, general categories under which several theorems fall. Other names are broad and general-sounding but only refer to a single theorem.
Classical Electrodynamics Antidynamo theorems is a general category of theorems that restrict the type of magnetic fields that can be produced by dynamo action.
Cowling's theorem states that an axisymmetric magnetic field cannot be maintained through a self-sustaining dynamo action by an axially symmetric current.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges.
Non-Relativistic Quantum Mechanics and Quantum Information: Bell's theorem, Kochen–Specker theorem, PBR theorem, No-hiding theorem, No-cloning theorem, Quantum no-deleting theorem, No-teleportation theorem, No-broadcast theorem. The no-communication theorem in quantum information theory gives conditions under which instantaneous transfer of information between two observers is impossible.
Similar to point masses, in electromagnetism physicists discuss a point charge, a point particle with a nonzero electric charge. The fundamental equation of electrostatics is Coulomb's law, which describes the electric force between two point charges. Another result, Earnshaw's theorem, states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges. The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. This was first proven by British mathematician Samuel Earnshaw in 1842.
It is usually cited in reference to magnetic fields, but was first applied to electrostatic fields.
Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields). Earnshaw's theorem forbids magnetic levitation in many common situations.
If the materials are not hard, Braunbeck's extension shows that materials with relative magnetic permeability greater than one (paramagnetism) are further destabilising, but materials with a permeability less than one (diamagnetic materials) permit stable configurations.","Another result, Earnshaw's theorem, states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges(Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.)
Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point chargesThis was first proven by British mathematician Samuel Earnshaw in 1842.
It is usually cited in reference to magnetic fields, but was first applied to electrostatic fields.
Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields)This was first proven by British mathematician Samuel Earnshaw in 1842.The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the chargesOther names are broad and general-sounding but only refer to a single theorem.
Classical Electrodynamics Antidynamo theorems is a general category of theorems that restrict the type of magnetic fields that can be produced by dynamo action.
Cowling's theorem states that an axisymmetric magnetic field cannot be maintained through a self-sustaining dynamo action by an axially symmetric current.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges.
Non-Relativistic Quantum Mechanics and Quantum Information Bell's theorem Kochen–Specker theorem PBR theorem No-hiding theorem No-cloning theorem Quantum no-deleting theorem No-teleportation theor","Another result, Earnshaw's theorem, states that a collection of point charges cannot be maintained in a static equilibrium configuration solely by the electrostatic interaction of the charges(Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.)
Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point chargesThis was first proven by British mathematician Samuel Earnshaw in 1842.
It is usually cited in reference to magnetic fields, but was first applied to electrostatic fields.
Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields)This was first proven by British mathematician Samuel Earnshaw in 1842.The electric field associated with a classical point charge increases to infinity as the distance from the point charge decreases towards zero, which suggests that the model is no longer accurate in this limit.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the chargesOther names are broad and general-sounding but only refer to a single theorem.
Classical Electrodynamics Antidynamo theorems is a general category of theorems that restrict the type of magnetic fields that can be produced by dynamo action.
Cowling's theorem states that an axisymmetric magnetic field cannot be maintained through a self-sustaining dynamo action by an axially symmetric current.
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges.
Non-Relativistic Quantum Mechanics and Quantum Information Bell's theorem Kochen–Specker theorem PBR theorem No-hiding theorem No-cloning theorem Quantum no-deleting theorem No-teleportation theor[SEP]What does Earnshaw's theorem state?","['D', 'B', 'A']",1.0
What is radiosity in radiometry?,"In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. Radiosity may refer to: *Radiosity (radiometry), the total radiation (emitted plus reflected) leaving a surface, certainly including the reflected radiation and the emitted radiation. Radiosity is often called in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity. ==Mathematical definitions== ===Radiosity=== Radiosity of a surface, denoted Je (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as :J_\mathrm{e} = \frac{\partial \Phi_\mathrm{e}}{\partial A} = J_\mathrm{e,em} + J_\mathrm{e,r} + J_\mathrm{e,tr}, where * ∂ is the partial derivative symbol * \Phi_e is the radiant flux leaving (emitted, reflected and transmitted) * A is the area * J_{e,em} = M_e is the emitted component of the radiosity of the surface, that is to say its exitance * J_{e,r} is the reflected component of the radiosity of the surface * J_{e,tr} is the transmitted component of the radiosity of the surface For an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: :J_\mathrm{e} = M_\mathrm{e} + J_\mathrm{e,r}. Radiodensity (or radiopacity) is opacity to the radio wave and X-ray portion of the electromagnetic spectrum: that is, the relative inability of those kinds of electromagnetic radiation to pass through a particular material. The radiosity of an opaque, gray and diffuse surface is given by :J_\mathrm{e} = M_\mathrm{e} + J_\mathrm{e,r} = \varepsilon \sigma T^4 + (1 - \varepsilon) E_\mathrm{e}, where *ε is the emissivity of that surface; *σ is the Stefan–Boltzmann constant; *T is the temperature of that surface; *Ee is the irradiance of that surface. In such a case, the radiosity does not depend on the angle of incidence of reflecting radiation and this information is lost on a diffuse surface. In reality, however, the radiosity will have a specular component from the reflected radiation. In such an application, the radiosity must be calculated spectrally and then integrated over the range of radiation spectrum. Spectral radiosity in wavelength of a surface, denoted Je,λ, is defined as :J_{\mathrm{e},\lambda} = \frac{\partial J_\mathrm{e}}{\partial \lambda}, where λ is the wavelength. ==Radiosity method== thumb|400px|right|The two radiosity components of an opaque surface. In heat transfer, combining these two factors into one radiosity term helps in determining the net energy exchange between multiple surfaces. ===Spectral radiosity=== Spectral radiosity in frequency of a surface, denoted Je,ν, is defined as :J_{\mathrm{e}, u} = \frac{\partial J_\mathrm{e}}{\partial u}, where ν is the frequency. The SI unit of radiosity is the watt per square metre (), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (). Materials that inhibit the passage of electromagnetic radiation are called radiodense or radiopaque, while those that allow radiation to pass more freely are referred to as radiolucent. 
If it is not, then the radiosity will vary as a function of position along the surface. Radiophysics (also modern writing ""radio physics""Radio Physics Solutions company official web page) is a branch of physics focused on the theoretical and experimental study of certain kinds of radiation, its emission, propagation and interaction with matter. The two main factors contributing to a material's radiopacity are density and atomic number. *Radiosity (computer graphics), a rendering algorithm which gives a realistic rendering of shadows and diffuse light. Radiopacity is one of the key considerations in the design of various devices such as guidewires or stents that are used during radiological intervention. Radiopaque volumes of material have white appearance on radiographs, compared with the relatively darker appearance of radiolucent volumes. These can be for instance, in the field of radiometry or the measurement of ionising radiation radiated from a source. ==Ionising radiation== thumb|400px|Graphic showing relationships between radioactivity and detected ionizing radiation. Though the term radiodensity is more commonly used in the context of qualitative comparison, radiodensity can also be quantified according to the Hounsfield scale, a principle which is central to X-ray computed tomography (CT scan) applications. ","Radiosity is the radiant flux entering a surface per unit area, including emitted, reflected, and transmitted radiation.","Radiosity is the radiant flux entering a surface per unit area, including absorbed, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit area, including absorbed, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit area, including emitted, reflected, and transmitted radiation.","Radiosity is the radiant flux leaving a surface per unit volume, including emitted, reflected, and transmitted radiation.",D,kaggle200,"In radiometry, irradiance is the radiant flux ""received"" by a ""surface"" per unit area. The SI unit of irradiance is the watt per square metre (W⋅m). The CGS unit erg per square centimetre per second (erg⋅cm⋅s) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called ""radiant flux"".
In radiometry, radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation, and to quantify emission of neutrinos and other particles. The SI unit of radiance is the watt per steradian per square metre (W·sr−1·m−2). It is a ""directional"" quantity: the radiance of a surface depends on the direction from which it is being observed.
More correctly, radiosity ""B"" is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:
In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiosity is the watt per square metre (W/m2), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second (erg·cm−2·s−1) is often used in astronomy. Radiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.","The radiosity method, in the context of computer graphics, derives from (and is fundamentally the same as) the radiosity method in heat transfer. In this context, radiosity is the total radiative flux (both reflected and re-radiated) leaving a surface; this is also sometimes known as radiant exitance. Calculation of radiosity, rather than surface temperatures, is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.
Radiosity: Radiosity of a surface, denoted Je (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as Je = ∂Φe/∂A = Je,em + Je,r + Je,tr, where ∂ is the partial derivative symbol, Φe is the radiant flux leaving (emitted, reflected and transmitted), A is the area, Je,em = Me is the emitted component of the radiosity of the surface (that is to say its exitance), Je,r is the reflected component of the radiosity of the surface, and Je,tr is the transmitted component of the radiosity of the surface. For an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: Je = Me + Je,r.
In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiosity is the watt per square metre (W/m2), while that of spectral radiosity in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral radiosity in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second (erg·cm−2·s−1) is often used in astronomy. Radiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.","Radiosity is often called in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.In this context, radiosity is the total radiative flux (both reflected and re-radiated) leaving a surface; this is also sometimes known as radiant exitanceThe radiosity method, in the context of computer graphics, derives from (and is fundamentally the same as) the radiosity method in heat transfer- In radiometry, irradiance is the radiant flux ""received"" by a ""surface"" per unit areaIt is a ""directional"" quantity: the radiance of a surface depends on the direction from which it is being observed.
More correctly, radiosity ""B"" is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:
In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelengthRadiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensityCalculation of radiosity, rather than surface temperatures, is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.
Radiosity Radiosity of a surface, denoted Je (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as Je=∂Φe∂A=Je,em+Je,r+Je,tr, where ∂ is the partial derivative symbol Φe is the radiant flux leaving (emitted, reflected and transmitted) A is the area Je,em=Me is the emitted component of the radiosity of the surface, that is to say its exitance Je,r is the reflected component of the radiosity of the surface Je,tr is the transmitted component of the radiosity of the surfaceFor an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: Je=Me+Je,r.
In radiometry, radiosity is the radiant flux leaving (","Radiosity is often called in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.In this context, radiosity is the total radiative flux (both reflected and re-radiated) leaving a surface; this is also sometimes known as radiant exitanceThe radiosity method, in the context of computer graphics, derives from (and is fundamentally the same as) the radiosity method in heat transfer- In radiometry, irradiance is the radiant flux ""received"" by a ""surface"" per unit areaIt is a ""directional"" quantity: the radiance of a surface depends on the direction from which it is being observed.
More correctly, radiosity ""B"" is the energy per unit area leaving the patch surface per discrete time interval and is the combination of emitted and reflected energy:
In radiometry, radiosity is the radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area, and spectral radiosity is the radiosity of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelengthRadiosity is often called intensity in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensityCalculation of radiosity, rather than surface temperatures, is a key aspect of the radiosity method that permits linear matrix methods to be applied to the problem.
Radiosity Radiosity of a surface, denoted Je (""e"" for ""energetic"", to avoid confusion with photometric quantities), is defined as Je=∂Φe∂A=Je,em+Je,r+Je,tr, where ∂ is the partial derivative symbol Φe is the radiant flux leaving (emitted, reflected and transmitted) A is the area Je,em=Me is the emitted component of the radiosity of the surface, that is to say its exitance Je,r is the reflected component of the radiosity of the surface Je,tr is the transmitted component of the radiosity of the surfaceFor an opaque surface, the transmitted component of radiosity Je,tr vanishes and only two components remain: Je=Me+Je,r.
In radiometry, radiosity is the radiant flux leaving ([SEP]What is radiosity in radiometry?","['D', 'C', 'E']",1.0
What is a virtual particle?,"Virtual photons are referred to as ""virtual"" because they do not exist as free particles in the traditional sense but instead serve as intermediate particles in the exchange of force between other particles. The virtual particles, also known as force carriers, are bosons, with different bosons associated with each force. Virtual photons are said to be ""off-shell"", which means that they do not obey the usual relationship between energy and momentum that applies to real particles. In particle physics, V was a generic name for heavy, unstable subatomic particles that decay into a pair of particles, thereby producing a characteristic letter V in a bubble chamber or other particle detector. Virtual photons are a fundamental concept in particle physics and quantum field theory that play a crucial role in describing the interactions between electrically charged particles. Virtual photons are thought of as fluctuations in the electromagnetic field, characterized by their energy, momentum, and polarization. In computing, a virtual machine (VM) is the virtualization or emulation of a computer system. The virtual-particle description of static forces is capable of identifying the spatial form of the forces, such as the inverse-square behavior in Newton's law of universal gravitation and in Coulomb's law. There are limits to the validity of the virtual particle picture. A virtual artifact (VA) is an immaterial object that exists in the human mind or in a digital environment, for example the Internet, intranet, virtual reality, cyberspace, etc.Masaki Omata, Kentaro Go, Atsumi Imamiya. The physical, ""real-world"" hardware running the VM is generally referred to as the 'host', and the virtual machine emulated on that machine is generally referred to as the 'guest'. Virtual machines are based on computer architectures and provide the functionality of a physical computer. If virtual photons exchanged between particles have a positive energy, they contribute to the electromagnetic force as a repulsive force. The virtual-particle formulation is derived from a method known as perturbation theory which is an approximation assuming interactions are not too strong, and was intended for scattering problems, not bound states such as atoms. Simulated virtual objects (photorealistic VA) and environments have a model in the real world; however, depending on the context, an abstract virtual artifact isn't necessarily dependent on the laws of physics or causality.Vince, John. On the other hand, if the virtual photons have a negative energy, they contribute to the electromagnetic force as an attractive force. There are insights that can be obtained, however, without going into the machinery of path integrals, such as why classical gravitational and electrostatic forces fall off as the inverse square of the distance between bodies. ===Path-integral formulation of virtual-particle exchange=== A virtual particle is created by a disturbance to the vacuum state, and the virtual particle is destroyed when it is absorbed back into the vacuum state by another disturbance. It is important to note that positive and negative virtual photons are not separate particles, but rather a way of classifying the virtual photons that exist in the electromagnetic field. The mechanics of virtual-particle exchange is best described with the path integral formulation of quantum mechanics. 
These classifications are based on the direction of the energy and momentum of the virtual photons and their contribution to the electromagnetic force. ",A particle that is not affected by the strong force.,A particle that is not affected by the weak force.,A particle that is created in a laboratory for experimental purposes.,A particle that is not directly observable but is inferred from its effects on measurable particles.,A particle that is directly observable and can be measured in experiments.,D,kaggle200,"All this, together, implies that the decay of the gluino can only go through a virtual particle, a high-mass squark. The mean decay time depends on the mass of the intermediate virtual particle, and in this case can be very long.
When one particle scatters off another, altering its trajectory, there are two ways to think about the process. In the field picture, we imagine that the field generated by one particle caused a force on the other. Alternatively, we can imagine one particle emitting a virtual particle which is absorbed by the other. The virtual particle transfers momentum from one particle to the other. This particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counter. They can also be scanned with an underwater microscope, such as ecoSCOPE.
Particle board has had a huge influence on furniture design. In the early 1950s, particle board kitchens started to come into use in furniture construction but, in many cases, it remained more expensive than solid wood. A particle board kitchen was only available to the very wealthy. Once the technology was more developed, particle board became cheaper.","When one particle scatters off another, altering its trajectory, there are two ways to think about the process. In the field picture, we imagine that the field generated by one particle caused a force on the other. Alternatively, we can imagine one particle emitting a virtual particle which is absorbed by the other. The virtual particle transfers momentum from one particle to the other. This particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Even among particle physicists, the exact definition of a particle has diverse descriptions. These professional attempts at the definition of a particle include: a particle is a collapsed wave function; a particle is a quantum excitation of a field; a particle is an irreducible representation of the Poincaré group; a particle is an observed thing.
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counter. They can also be scanned with an underwater microscope, such as ecoSCOPE.","The virtual particle transfers momentum from one particle to the otherAlternatively, we can imagine one particle emitting a virtual particle which is absorbed by the otherThis particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Even among particle physicists, the exact definition of a particle has diverse descriptionsThese professional attempts at the definition of a particle include: A particle is a collapsed wave function A particle is a quantum excitation of a field A particle is an irreducible representation of the Poincaré group A particle is an observed thing
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counterThis particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counterIn the field picture, we imagine that the field generated by one particle caused a force on the other- All this, together, implies that the decay of the gluino can only go through a virtual particle, a high-mass squarkWhen one particle scatters off another, altering its trajectory, there are two ways to think about the processThe mean decay time depends on the mass of the intermediate virtual particle, and in this case can be very long.
When one particle scatters off another, altering its trajectory, there are two ways to think about the processThey can also be scanned with an underwater microscope, such as ecoSCOPE.
Particle board has had a huge influence on furniture designOnce the technology was more developed, particle board became cheaper.In the early 1950s, particle board kitchens started to come into use in furniture construction but, in many cases, it remained more expensive than","The virtual particle transfers momentum from one particle to the otherAlternatively, we can imagine one particle emitting a virtual particle which is absorbed by the otherThis particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Even among particle physicists, the exact definition of a particle has diverse descriptionsThese professional attempts at the definition of a particle include: A particle is a collapsed wave function A particle is a quantum excitation of a field A particle is an irreducible representation of the Poincaré group A particle is an observed thing
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counterThis particle viewpoint is especially helpful when there are a large number of complicated quantum corrections to the calculation since these corrections can be visualized as Feynman diagrams containing additional virtual particles.
Particle levels in water (or air) can be measured with a turbidity meter and analyzed with a particle counterIn the field picture, we imagine that the field generated by one particle caused a force on the other- All this, together, implies that the decay of the gluino can only go through a virtual particle, a high-mass squarkWhen one particle scatters off another, altering its trajectory, there are two ways to think about the processThe mean decay time depends on the mass of the intermediate virtual particle, and in this case can be very long.
When one particle scatters off another, altering its trajectory, there are two ways to think about the processThey can also be scanned with an underwater microscope, such as ecoSCOPE.
Particle board has had a huge influence on furniture designOnce the technology was more developed, particle board became cheaper.In the early 1950s, particle board kitchens started to come into use in furniture construction but, in many cases, it remained more expensive than[SEP]What is a virtual particle?","['D', 'A', 'E']",1.0
"Who proposed the principle of ""complexity from noise"" and when was it first introduced?","""The Complexity of Songs"" is a scholarly article by computer scientist Donald Knuth in 1977, as an in-joke about computational complexity theory. However the Europeans were unprepared to grasp this notion, and the chiefs, in order to establish a common ground to convey their achievements later proceeded to demonstrate an approach described by the recurrent relation S_k = C_1S_{k-1}, where C_1 = 'i', with a suboptimal complexity given by c = 1.Kurt Eisemann, ""Further Results on the Complexity of Songs"", Communications of the ACM, vol 28 (1985), no. 3, p. 235. ""The Telnet Song"", Communications of the ACM, April 1984Text of the TELNET Song (retrieved January 5, 2012)Telnet song in MIDI format It has been suggested that the complexity analysis of human songs can be a useful pedagogic device for teaching students complexity theory. Essential complexity is a numerical measure defined by Thomas J. McCabe, Sr., in his highly cited, 1976 paper better known for introducing cyclomatic complexity. The Collapse of Chaos: Discovering Simplicity in a Complex World (1994) is a book about complexity theory and the nature of scientific explanation written by biologist Jack Cohen and mathematician Ian Stewart. Alan Louis Selman (April 2, 1941 – January 22, 2021) was a mathematician and theoretical computer scientist known for his research on structural complexity theory, the study of computational complexity in terms of the relation between complexity classes rather than individual algorithmic problems. ==Education and career== Selman was a graduate of the City College of New York. The O(1) space complexity result was also implemented by Guy L. Steele, Jr., perhaps challenged by Knuth's article.Peter G. Neumann, ""A further view of the first quarter century"" ,Communications of the ACM, Volume 27, Issue 4, April 1984, p. 343 Dr. Steele's TELNET Song used a completely different algorithm based on exponential recursion, a parody on some implementations of TELNET.Guy L. Steele, Jr., More ingenious approaches yield songs of complexity O(\log N), a class known as ""m bottles of beer on the wall"". In this book Cohen and Stewart give their ideas on chaos theory, particularly on how the simple leads to the complex, and conversely, how the complex leads to the simple, and argue for a need for contextual explanation in science as a complement to reduction. He was the first chair of the annual Computational Complexity Conference, and served as editor-in-chief of the journal Theory of Computing Systems for 18 years, beginning in 2001. ==Selected publications== Selman's research publications included well-cited works on the classification of different types of reductions according to their computational power, the formulation of promise problems, the complexity class UP of problems solvable by unambiguous Turing machines, and their applications to the computational complexity of cryptography: * * * As well as being the editor of several edited volumes, Selman was the coauthor of the textbook Computability and Complexity Theory (with Steve Homer, Springer, 2001; 2nd ed., 2011). ==Recognition== Selman was a Fulbright Scholar and Humboldt Fellow. Reprinted in: Knuth further demonstrates a way of producing songs with O(\sqrt N) complexity, an approach ""further improved by a Scottish farmer named O. MacDonald"". A noise print is part of a technique used in noise reduction. 
Finally, the progress during the 20th century—stimulated by the fact that ""the advent of modern drugs has led to demands for still less memory""—leads to the ultimate improvement: Arbitrarily long songs with space complexity O(1) exist, e.g. a song defined by the recurrence relation :S_0=\epsilon, S_k = V_kS_{k-1},\, k\ge 1, :V_k = 'That's the way,' U 'I like it,' U, for all k \ge 1 :U= 'uh huh,' 'uh huh' == Further developments == Prof. Kurt Eisemann of San Diego State University in his letter to the Communications of the ACM further improves the latter seemingly unbeatable estimate. A noise print is commonly used in audio mastering to help reduce the effects of unwanted noise from a piece of audio. As McCabe explains in his paper, his essential complexity metric was designed to provide a measure of how far off this ideal (of being completely structured) a given program was. The article ""On Superpolylogarithmic Subexponential Functions"" by Prof. Alan ShermanAlan Sherman, ""On Superpolylogarithmic Subexponential Functions"" (PostScript), ACM SIGACT News, vol. 22, no. 1, 1991, p. 65 writes that Knuth's article was seminal for analysis of a special class of functions. == References == == External links == * ""The Complexity of Songs"", Knuth, Donald E. (1984). As Prof. Eisemann puts it: > ""When the Mayflower voyagers first descended on these shores, the native > Americans proud of their achievement in the theory of information storage > and retrieval, at first welcomed the strangers with the complete silence. > This was meant to convey their peak achievement in the complexity of songs, > namely the demonstration that a limit as low as c = 0 is indeed obtainable."" ===Additional reviews=== * * * * * ==References== * Jack Cohen and Ian Stewart: The Collapse of Chaos: discovering simplicity in a complex world, Penguin Books, 1994, Category:Books by Ian Stewart (mathematician) Category:Science books Category:1994 non- fiction books Category:Chaos theory He begins with an observation that for practical applications the value of the ""hidden constant"" c in the Big Oh notation may be crucial in making the difference between the feasibility and unfeasibility: for example a constant value of 1080 would exceed the capacity of any known device. He further notices that a technique has already been known in Mediaeval Europe whereby textual content of an arbitrary tune can be recorded basing on the recurrence relation S_k = C_2S_{k-1}, where C_2 = 'la', yielding the value of the big-Oh constant c equal to 2. ",Ilya Prigogine in 1979,Henri Atlan in 1972,Democritus and Lucretius in ancient times,None of the above.,René Descartes in 1637,B,kaggle200,"A swing skirt is a vintage knee-length retro skirt typical of the 1960s, but first introduced in the 1930s.
Rolf Landauer first proposed the principle in 1961 while working at IBM. He justified and stated important limits to an earlier conjecture by John von Neumann. For this reason, it is sometimes referred to as being simply the Landauer bound or Landauer limit.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960. It notes that self-organization is facilitated by random perturbations (""noise"") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a ""strong"" or ""deep"" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée"". The thermodynamicist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos"". It is applied in the method of simulated annealing for problem solving and machine learning.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960. It notes that self-organization is facilitated by random perturbations (""noise"") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a ""strong"" or ""deep"" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée"". The physicist and chemist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos"". It is applied in the method of simulated annealing for problem solving and machine learning.","The term ""luminescence"" was first introduced in 1888.
Self-organization, a process where some form of overall order arises out of the local interactions between parts of an initially disordered system, was discovered in cybernetics by William Ross Ashby in 1947. It states that any deterministic dynamic system automatically evolves towards a state of equilibrium that can be described in terms of an attractor in a basin of surrounding states. Once there, the further evolution of the system is constrained to remain in the attractor. This constraint implies a form of mutual dependency or coordination between its constituent components or subsystems. In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960. It notes that self-organization is facilitated by random perturbations (""noise"") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a ""strong"" or ""deep"" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fumée. The thermodynamicist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos"". It is applied in the method of simulated annealing for problem solving and machine learning.Wiener regarded the automatic serial identification of a black box and its subsequent reproduction (copying) as sufficient to meet the condition of self-organization. The importance of phase locking or the ""attraction of frequencies"", as he called it, is discussed in the 2nd edition of his ""Cybernetics"". Drexler sees self-replication (copying) as a key step in nano and universal assembly. In later work he seeks to lessen this constraint.By contrast, the four concurrently connected galvanometers of W. Ross Ashby's Homeostat hunt, when perturbed, to converge on one of many possible stable states. Ashby used his state counting measure of variety to describe stable states and produced the ""Good Regulator"" theorem which requires internal models for self-organized endurance and stability (e.g. Nyquist stability criterion).
The cybernetician William Ross Ashby formulated the original principle of self-organization in 1947. It states that any deterministic dynamic system automatically evolves towards a state of equilibrium that can be described in terms of an attractor in a basin of surrounding states. Once there, the further evolution of the system is constrained to remain in the attractor. This constraint implies a form of mutual dependency or coordination between its constituent components or subsystems. In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960. It notes that self-organization is facilitated by random perturbations (""noise"") that let the system explore a variety of states in its state space. This increases the chance that the system will arrive into the basin of a ""strong"" or ""deep"" attractor, from which it then quickly enters the attractor itself. The biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fumée. The physicist and chemist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos"". It is applied in the method of simulated annealing for problem solving and machine learning.","The biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fuméeThe biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée""The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée""The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fuméeIt is applied in the method of simulated annealing for problem solving and machine learning.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960For this reason, it is sometimes referred to as being simply the Landauer bound or Landauer limit.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960The physicist and chemist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos""The thermodynamicist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos""Nyquist stability criterion).
The cybernetician William Ross Ashby formulated the original principle of self-organization in 1947It notes that self-organization is facilitated by random perturbations (""noise"") t","The biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fuméeThe biophysicist Henri Atlan developed this concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée""The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" () first in the 1972 book ""L'organisation biologique et la théorie de l'information"" and then in the 1979 book ""Entre le cristal et la fumée""The biophysicist Henri Atlan developed such a concept by proposing the principle of ""complexity from noise"" (French: le principe de complexité par le bruit) first in the 1972 book L'organisation biologique et la théorie de l'information and then in the 1979 book Entre le cristal et la fuméeIt is applied in the method of simulated annealing for problem solving and machine learning.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960For this reason, it is sometimes referred to as being simply the Landauer bound or Landauer limit.
The cybernetician Heinz von Foerster formulated the principle of ""order from noise"" in 1960The physicist and chemist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos""The thermodynamicist Ilya Prigogine formulated a similar principle as ""order through fluctuations"" or ""order out of chaos""Nyquist stability criterion).
The cybernetician William Ross Ashby formulated the original principle of self-organization in 1947It notes that self-organization is facilitated by random perturbations (""noise"") t[SEP]Who proposed the principle of ""complexity from noise"" and when was it first introduced?","['B', 'D', 'C']",1.0
What is the order parameter that breaks the electromagnetic gauge symmetry in superconductors?,"The table below shows some of the parameters of common superconductors. The superconducting coherence length is one of two parameters in the Ginzburg–Landau theory of superconductivity. The theory predicts the upper critical field () at 0 K from and the slope of at . ==References== Category:Superconductivity In strong-coupling, anisotropic and multi-component theories these expressions are modified. ==See also== * Ginzburg–Landau theory of superconductivity * BCS theory of superconductivity * London penetration depth ==References== Category:Superconductivity This theory was proposed in 1966 to go beyond BCS theory of superconductivity and it provides predictions of upper critical field () in type-II superconductors. Since then over 30 heavy fermion superconductors were found (in materials based on Ce, U), with a critical temperature up to 2.3 K (in CeCoIn5). Heavy fermion superconductors are a type of unconventional superconductor. Furthermore, UPd2Al3 orders antiferromagnetically at TN=14K, and UPd2Al3 thus features the unusual behavior that this material, at temperatures below 2K, is simultaneously superconducting and magnetically ordered. The Formation of Cooper Pairs and the Nature of Superconducting Currents, CERN 79-12 (Yellow Report), December 1979 The ratio \kappa = \lambda/\xi , where \lambda is the London penetration depth, is known as the Ginzburg–Landau parameter. From specific heat measurements (ΔC/C(TC) one knows that the Cooper pairs in the superconducting state are also formed by the heavy quasiparticles.Neil W. Ashcroft and N. David Mermin, Solid State Physics In contrast to normal superconductors it cannot be described by BCS-Theory. In some special limiting cases, for example in the weak-coupling BCS theory of isotropic s-wave superconductor it is related to characteristic Cooper pair size: : \xi_{BCS} = \frac{\hbar v_f}{\pi \Delta} where \hbar is the reduced Planck constant, m is the mass of a Cooper pair (twice the electron mass), v_f is the Fermi velocity, and \Delta is the superconducting energy gap. Material TC (K) comments original reference CeCu2Si2 0.7 first unconventional superconductor CeCoIn5 2.3 highest TC of all Ce-based heavy fermions CePt3Si 0.75 first heavy-fermion superconductor with non- centrosymmetric crystal structure CeIn3 0.2 superconducting only at high pressures UBe13 0.85 p-wave superconductor UPt3 0.48 several distinct superconducting phases URu2Si2 1.3 mysterious 'hidden-order phase' below 17 K UPd2Al3 2.0 antiferromagnetic below 14 K UNi2Al3 1.1 antiferromagnetic below 5 K Heavy Fermion materials are intermetallic compounds, containing rare earth or actinide elements. The superconducting coherence length is a measure of the size of a Cooper pair (distance between the two electrons) and is of the order of 10^{-4} cm. In superconductivity, the superconducting coherence length, usually denoted as \xi (Greek lowercase xi), is the characteristic exponent of the variations of the density of superconducting component. Some heavy fermion superconductors are candidate materials for the Fulde-Ferrell-Larkin- Ovchinnikov (FFLO) phase. The first heavy fermion superconductor, CeCu2Si2, was discovered by Frank Steglich in 1978. For heavy-fermion superconductors it is generally believed that the coupling mechanism cannot be phononic in nature. Type-I superconductors are those with 0<\kappa<1/\sqrt{2}, and type-II superconductors are those with \kappa>1/\sqrt{2}. 
In Landau mean-field theory, at temperatures T near the superconducting critical temperature T_c, \xi (T) \propto (1-T/T_c)^{-\frac{1}{2}}. At that point, the Tc=2.0K of UPd2Al3 was the highest critical temperature amongst all known heavy-fermion superconductors, and this record would stand for 10 years until CeCoIn5 was discovered in 2001. ==Metallic state== The overall metallic behavior of UPd2Al3, e.g. as deduced from the dc resistivity, is typical for a heavy- fermion material and can be explained as follows: incoherent Kondo scattering above approximately 80 K and coherent heavy-fermion state (in a Kondo lattice) at lower temperatures. ",None of the above.,A thin cylindrical plastic rod.,A condensed-matter collective field ψ.,The cosmic microwave background.,A component of the Higgs field.,C,kaggle200,"A gauge symmetry of a Lagrangian formula_1 is defined as a differential operator on some vector bundle formula_2 taking its values in the linear space of (variational or exact) symmetries of formula_1. Therefore, a gauge symmetry of formula_1
Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization.
It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of ""d"" symmetry superconductors. But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguous. John R. Kirtley and C. C. Tsuei thought that the ambiguous results came from the defects inside the HTS, so that they designed an experiment where both clean limit (no defects) and dirty limit (maximal defects) were considered simultaneously. In the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the ""d"" symmetry of the order parameter in YBCO. But, since YBCO is orthorhombic, it might inherently have an admixture of ""s"" symmetry. So, by tuning their technique further, they found that there was an admixture of ""s"" symmetry in YBCO within about 3%. Also, they found that there was a pure ""d"" order parameter symmetry in the tetragonal TlBaCuO.
If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.","If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification.
In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetry.","The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification.
In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetryFor example, in a magnet, the order parameter is the local magnetization.
It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of ""d"" symmetry superconductorsTherefore, a gauge symmetry of formula_1
Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under considerationIf ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetryIn the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the ""d"" symmetry of the order parameter in YBCOAlso, they found that there was a pure ""d"" order parameter symmetry in the tetragonal TlBaCuO.
If formula_1 is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguousLike the ferromagnetic example, there is a phase transition at the electroweak temperature- A gauge symmetry of a Lagrangian formula_1 is defined as a differential operator on some vector bundle formula_2 taking its values in the linear space of (variational or exact) symmetries of formula_1So, by tuning their technique further, they found that there was an admixture of ""s"" symmetry in YBCO within about 3","The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification.
In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetryFor example, in a magnet, the order parameter is the local magnetization.
It was expected that a half-integer flux, that is, a spontaneous magnetization could only occur for a junction of ""d"" symmetry superconductorsTherefore, a gauge symmetry of formula_1
Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under considerationIf ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetryIn the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the ""d"" symmetry of the order parameter in YBCOAlso, they found that there was a pure ""d"" order parameter symmetry in the tetragonal TlBaCuO.
If formula_1 is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.But, even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguousLike the ferromagnetic example, there is a phase transition at the electroweak temperature- A gauge symmetry of a Lagrangian formula_1 is defined as a differential operator on some vector bundle formula_2 taking its values in the linear space of (variational or exact) symmetries of formula_1So, by tuning their technique further, they found that there was an admixture of ""s"" symmetry in YBCO within about 3[SEP]What is the order parameter that breaks the electromagnetic gauge symmetry in superconductors?","['C', 'E', 'A']",1.0
What is the reason for the sun appearing slightly yellowish when viewed from Earth?,"A number of different atmospheric conditions can be responsible for this effect, all of which divert the sunlight in such a way as to allow it to reach the observer's eye, thereby giving the impression that the light comes directly from the Sun itself. A related phenomenon is gegenschein (or counterglow), sunlight backscattered from the interplanetary dust, appearing directly opposite to the Sun as a faint but slightly brighter oval glow. Yellow sun or Yellow Sun may refer to: *Yellow Sun (nuclear weapon), a British nuclear weapon *Yellow sun, a type of stellar classification *""Yellow Sun"", a song by The Raconteurs from their album Broken Boy Soldiers This is why it is most clearly visible near sunrise or sunset when the sun is blocked, but the dust particles nearest the line of sight to the sun are not. Depending on circumstances, these phenomena can give the impression of an actual sunset. Similarly to a false sunrise, other atmospheric circumstances may be responsible for the effect as well, such as simple reflection of the sunlight off the bottom of the clouds, or a type of mirage like the Novaya Zemlya effect. ==See also== *False sunrise *Halo (optical phenomenon) *Lower tangent arc *Mirage *Novaya Zemlya effect *Subsun *Sun pillar *Upper tangent arc ==References== Category:Atmospheric optical phenomena Up to now, the ""Blue Sky with a White Sun"" can still be seen in the emblem of the US Army 75th Ranger Regiment. The zodiacal light (also called false dawn when seen before sunrise) is a faint glow of diffuse sunlight scattered by interplanetary dust. The Blue Sky with a White Sun () serves as the design for the party flag and emblem of the Kuomintang, the canton of the flag of the Republic of China, the national emblem of the Republic of China, and as the naval jack of the ROC Navy. Several atmospheric phenomena that may alternatively be called a ""false sunrise"" are: * Simple reflection of the sunlight off the bottom of the clouds. There are several atmospheric conditions which may cause the effect, most commonly a type of halo, caused by the reflection and refraction of sunlight by small ice crystals in the atmosphere, often in the form of cirrostratus clouds. Consequently, its spectrum is the same as the solar spectrum. A false sunrise is any of several atmospheric optical phenomena in which the Sun appears to have risen, but is actually still some distance below the horizon. Depending on which variety of ""false sunset"" is meant, the halo has to appear either above the Sun (which itself is hidden below the horizon) or below it (in which case the real Sun is obstructed from view, e.g. by clouds or other objects), making the upper and lower tangent arc, upper and lower sun pillars and the subsun the most likely candidates. The spread of light can sometimes be deceivingly similar to a true sun. After the Northern Expedition it was replaced by the Blue Sky with a White Sun national emblem in 1928. ===Nationalist period=== Since 1928, under the KMT's political tutelage, the Blue Sky with a White Sun Flag shared the same prominence as the ROC flag. 
A false sunset can refer to one of two related atmospheric optical phenomena, in which either (1) the Sun appears to be setting into or to have set below the horizon while it is actually still some height above the horizon, or (2) the Sun has already set below the horizon, but still appears to be on or above the horizon (thus representing the reverse of a false sunrise). Like all halos, these phenomena are caused by the reflection and/or refraction of sunlight by ice crystals suspended in the atmosphere, often in the form of cirrus or cirrostratus clouds. The light scattered from extremely small dust particles is strongly forward scattering, although the zodiacal light actually extends all the way around the sky, hence it is brightest when observing at a small angle with the Sun. Thus it is possible to see more of the width at small angles toward the sun, and it appears wider near the horizon, closer to the sun under the horizon. == Origin == The source of the dust has been long debated. ",The sun appears yellowish due to a reflection of the Earth's atmosphere.,"The longer wavelengths of light, such as red and yellow, are not scattered away and are directly visible when looking towards the sun.","The sun appears yellowish due to the scattering of all colors of light, mainly blue and green, in the Earth's atmosphere.","The sun emits a yellow light due to its own spectrum, which is visible when viewed from Earth.","The atmosphere absorbs the shorter wavelengths of light, such as blue and red, leaving only the longer wavelengths of light, such as green and yellow, visible when looking towards the sun.",B,kaggle200,"A monochrome or red rainbow is an optical and meteorological phenomenon and a rare variation of the more commonly seen multicolored rainbow. Its formation process is identical to that of a normal rainbow (namely the reflection/refraction of light in water droplets), the difference being that a monochrome rainbow requires the sun to be close to the horizon; i.e., near sunrise or sunset. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
White light from the sun consists of a continuous spectrum of colors which, when divided, forms the colors of the rainbow: violet, indigo blue, blue, green, yellow, orange, and red. In its interaction with the Earth's atmosphere, sunlight tends to scatter the shorter wavelengths, i.e. the blue photons, which is why the sky is perceived as blue. On the other hand, at sunset, when the atmosphere is denser, the light is less scattered, so that the longer wavelengths, red, are perceived.
The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. The Solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and in rare occasions even green or blue. Despite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and exact ones are the subject of debate.
A portion of the beam of light coming from the sun scatters off molecules of gas and other small particles in the atmosphere. Here, Rayleigh scattering primarily occurs through sunlight's interaction with randomly located air molecules. It is this scattered light that gives the surrounding sky its brightness and its color. As previously stated, Rayleigh scattering is inversely proportional to the fourth power of wavelength, so that shorter wavelength violet and blue light will scatter more than the longer wavelengths (yellow and especially red light). However, the Sun, like any star, has its own spectrum and so ""I"" in the scattering formula above is not constant but falls away in the violet. In addition the oxygen in the Earth's atmosphere absorbs wavelengths at the edge of the ultra-violet region of the spectrum. The resulting color, which appears like a pale blue, actually is a mixture of all the scattered colors, mainly blue and green. Conversely, glancing toward the sun, the colors that were not scattered away—the longer wavelengths such as red and yellow light—are directly visible, giving the sun itself a slightly yellowish hue. Viewed from space, however, the sky is black and the sun is white.","Sunlight and neutrinos The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. The Solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and in rare occasions even green or blue. Despite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and exact ones are the subject of debate.
A monochrome or red rainbow is an optical and meteorological phenomenon and a rare variation of the more commonly seen multicolored rainbow. Its formation process is identical to that of a normal rainbow (namely the reflection/refraction of light in water droplets), the difference being that a monochrome rainbow requires the sun to be close to the horizon; i.e., near sunrise or sunset. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
A portion of the beam of light coming from the sun scatters off molecules of gas and other small particles in the atmosphere. Here, Rayleigh scattering primarily occurs through sunlight's interaction with randomly located air molecules. It is this scattered light that gives the surrounding sky its brightness and its color. As previously stated, Rayleigh scattering is inversely proportional to the fourth power of wavelength, so that shorter wavelength violet and blue light will scatter more than the longer wavelengths (yellow and especially red light). However, the Sun, like any star, has its own spectrum and so I0 in the scattering formula above is not constant but falls away in the violet. In addition the oxygen in the Earth's atmosphere absorbs wavelengths at the edge of the ultra-violet region of the spectrum. The resulting color, which appears like a pale blue, actually is a mixture of all the scattered colors, mainly blue and green. Conversely, glancing toward the sun, the colors that were not scattered away—the longer wavelengths such as red and yellow light—are directly visible, giving the sun itself a slightly yellowish hue. Viewed from space, however, the sky is black and the sun is white.","Conversely, glancing toward the sun, the colors that were not scattered away—the longer wavelengths such as red and yellow light—are directly visible, giving the sun itself a slightly yellowish hueWhen the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and in rare occasions even green or blueDespite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and exact ones are the subject of debate.
A portion of the beam of light coming from the sun scatters off molecules of gas and other small particles in the atmosphere. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
A portion of the beam of light coming from the sun scatters off molecules of gas and other small particles in the atmosphere. The Solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. In its interaction with the Earth's atmosphere, sunlight tends to scatter the shorter wavelengths, i.e. the blue photons, which is why the sky is perceived as blue. The low angle of the sun results in a longer distance for its light to travel through the atmosphere, causing shorter wavelengths of light, such as blue, green and yellow, to be scattered and leaving primarily red.
White light from the sun consists of a continuous spectrum of colors which, when divided, forms the colors of the rainbow: violet, indigo blue, blue, green, yellow, orange, and red. On the other hand, at sunset, when the atmosphere is denser, the light is less scattered, so that the longer wavelengths, red, are perceived.
The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. Despite its typical whiteness (white sunrays, white ambient light, white illumination of the Moon, etc.), some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and exact ones are the subject of debate.
","[SEP]What is the reason for the sun appearing slightly yellowish when viewed from Earth?","['B', 'E', 'D']",1.0
What is the Landau-Lifshitz-Gilbert equation used for in physics?,"In physics, the Landau–Lifshitz–Gilbert equation, named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is a name used for a differential equation describing the precessional motion of magnetization in a solid. The Landau–Lifshitz–Gilbert equation predicts the rotation of the magnetization in response to torques. Using the methods of irreversible statistical mechanics, numerous authors have independently obtained the Landau–Lifshitz equation. ==Landau–Lifshitz–Gilbert equation== In 1955 Gilbert replaced the damping term in the Landau–Lifshitz (LL) equation by one that depends on the time derivative of the magnetization: {d t}=-\gamma \left(\mathbf{M} \times \mathbf{H}_{\mathrm{eff}} - \eta \mathbf{M}\times\frac{d \mathbf{M}}{d t}\right)|}} This is the Landau–Lifshitz–Gilbert (LLG) equation, where is the damping parameter, which is characteristic of the material. In solid-state physics, the Landau–Lifshitz equation (LLE), named for Lev Landau and Evgeny Lifshitz, is a partial differential equation describing time evolution of magnetism in solids, depending on 1 time variable and 1, 2, or 3 space variables. ==Landau–Lifshitz equation== The LLE describes an anisotropic magnet. An additional term was added to the equation to describe the effect of spin polarized current on magnets. ==Landau–Lifshitz equation== thumb|upright|The terms of the Landau–Lifshitz–Gilbert equation: precession (red) and damping (blue). The various forms of the equation are commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials. An earlier, but equivalent, equation (the Landau–Lifshitz equation) was introduced by : {d t}= -\gamma \mathbf{M} \times \mathbf{H_\mathrm{eff}} - \lambda \mathbf{M} \times \left(\mathbf{M} \times \mathbf{H_{\mathrm{eff}}}\right)|}} where is the electron gyromagnetic ratio and is a phenomenological damping parameter, often replaced by :\lambda = \alpha \frac{\gamma}{M_\mathrm{s}}, where is a dimensionless constant called the damping factor. It is a modification by Gilbert of the original equation of Landau and Lifshitz. Mallinson, ""On damped gyromagnetic precession,"" in IEEE Transactions on Magnetics, vol. 23, no. 4, pp. 2003-2004, July 1987, doi: 10.1109/TMAG.1987.1065181. ==Landau–Lifshitz–Gilbert–Slonczewski equation== In 1996 Slonczewski expanded the model to account for the spin-transfer torque, i.e. the torque induced upon the magnetization by spin-polarized current flowing through the ferromagnet. The formal derivation to derive the Landau equation was given by Stuart, Watson and Palm in 1960.Stuart, J. T. (1960). In particular it can be used to model the time domain behavior of magnetic elements due to a magnetic field. The Landau–Zener formula is an analytic solution to the equations of motion governing the transition dynamics of a two-state quantum system, with a time- dependent Hamiltonian varying such that the energy separation of the two states is a linear function of time. The Landau equation is the equation for the magnitude of the disturbance, :\frac{d|A|^2}{dt} = 2\sigma_r |A|^2 - l_r |A|^4, which can also be re-written asProvansal, M., Mathis, C., & Boyer, L. (1987). In order that the equations of motion for the system might be solved analytically, a set of simplifications are made, known collectively as the Landau–Zener approximation. 
In 1944, Landau proposed an equation for the evolution of the magnitude of the disturbance, which is now called as the Landau equation, to explain the transition to turbulence based on a phenomenological argumentLandau, L. D. (1944). A description of the work is given in * * * * * == External links == * Magnetization dynamics applet Category:Magnetic ordering Category:Partial differential equations Category:Equations of physics Category:Lev Landau This better represents the behavior of real ferromagnets when the damping is large.For details of Kelly's non-resonant experiment, and of Gilbert's analysis (which led to Gilbert's modifying the damping term), see Gilbert, T. L. and Kelly, J. M. ""Anomalous rotational damping in ferromagnetic sheets"", Conf. Magnetism and Magnetic Materials, Pittsburgh, PA, June 14–16, 1955 (New York: American Institute of Electrical Engineers, Oct. 1955, pp. 253–263). The Landauer formula—named after Rolf Landauer, who first suggested its prototype in 1957—is a formula relating the electrical resistance of a quantum conductor to the scattering properties of the conductor. It can be transformed into the Landau–Lifshitz equation: {d t} = -\gamma' \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} - \lambda \mathbf{M} \times (\mathbf{M} \times \mathbf{H}_{\mathrm{eff}})|}} where :\gamma' = \frac{\gamma}{1 + \gamma^2\eta^2M_s^2} \qquad \text{and} \qquad\lambda = \frac{\gamma^2\eta}{1 + \gamma^2\eta^2M_s^2}. Springer Science & Business Media. etc. ==General solution== The Landau equation is linear when it is written for the dependent variable |A|^{-2}, :\frac{d|A|^{-2}}{dt} + 2\sigma_r |A|^{-2} = l_r. ","The Landau-Lifshitz-Gilbert equation is a differential equation used to describe the precessional motion of magnetization M in a liquid, and is commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials.","The Landau-Lifshitz-Gilbert equation is a differential equation used to describe the precessional motion of magnetization M in a solid, and is commonly used in astrophysics to model the effects of a magnetic field on celestial bodies.","The Landau-Lifshitz-Gilbert equation is a differential equation used to describe the precessional motion of magnetization M in a solid, and is commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials.","The Landau-Lifshitz-Gilbert equation is a differential equation used to describe the precessional motion of magnetization M in a solid, and is commonly used in macro-magnetics to model the effects of a magnetic field on ferromagnetic materials.","The Landau-Lifshitz-Gilbert equation is a differential equation used to describe the precessional motion of magnetization M in a liquid, and is commonly used in macro-magnetics to model the effects of a magnetic field on ferromagnetic materials.",C,kaggle200,"The LL equation was introduced in 1935 by Landau and Lifshitz to model the precessional motion of magnetization formula_1 in a solid with an effective magnetic field formula_2 and with damping. Later, Gilbert modified the damping term, which in the limit of small damping yields identical results. The LLG equation is,
With these considerations, the differential equation governing the behavior of a magnetic moment in the presence of an applied magnetic field with damping can be written in the most familiar form of the Landau-Lifshitz-Gilbert equation,
The various forms of the equation are commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials. In particular it can be used to model the time domain behavior of magnetic elements due to a magnetic field. An additional term was added to the equation to describe the effect of spin polarized current on magnets.
Since without damping formula_23 is directed perpendicular to both the moment and the field, the damping term of the Landau-Lifshitz-Gilbert equation provides for a change in the moment towards the applied field. The Landau-Lifshitz-Gilbert equation can also be written in terms of torques,","In physics, the Landau–Lifshitz–Gilbert equation, named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is a name used for a differential equation describing the precessional motion of magnetization M in a solid. It is a modification by Gilbert of the original equation of Landau and Lifshitz.
The LL equation was introduced in 1935 by Landau and Lifshitz to model the precessional motion of magnetization M in a solid with an effective magnetic field Heff and with damping. Later, Gilbert modified the damping term, which in the limit of small damping yields identical results. The LLG equation is, \frac{\partial \mathbf{m}}{\partial t} = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}.
The constant α is the Gilbert phenomenological damping parameter and depends on the solid, and γ is the electron gyromagnetic ratio. Here \mathbf{m} = \mathbf{M}/M_\mathrm{S}.
The various forms of the equation are commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials. In particular it can be used to model the time domain behavior of magnetic elements due to a magnetic field. An additional term was added to the equation to describe the effect of spin polarized current on magnets.","In physics, the Landau–Lifshitz–Gilbert equation, named for Lev Landau, Evgeny Lifshitz, and T. L. Gilbert, is a name used for a differential equation describing the precessional motion of magnetization M in a solid. The Landau-Lifshitz-Gilbert equation can also be written in terms of torques. It is a modification by Gilbert of the original equation of Landau and Lifshitz.
The LL equation was introduced in 1935 by Landau and Lifshitz to model the precessional motion of magnetization M in a solid with an effective magnetic field Heff and with damping. The LL equation was introduced in 1935 by Landau and Lifshitz to model the precessional motion of magnetization formula_1 in a solid with an effective magnetic field formula_2 and with damping. The LLG equation is,
With these considerations, the differential equation governing the behavior of a magnetic moment in the presence of an applied magnetic field with damping can be written in the most familiar form of the Landau-Lifshitz-Gilbert equation,
The various forms of the equation are commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials. Gilbert, is a name used for a differential equation describing the precessional motion of magnetization M in a solid. In particular it can be used to model the time domain behavior of magnetic elements due to a magnetic field. An additional term was added to the equation to describe the effect of spin polarized current on magnets.
Since without damping formula_23 is directed perpendicular to both the moment and the field, the damping term of the Landau-Lifshitz-Gilbert equation provides for a change in the moment towards the applied field. Later, Gilbert modified the damping term, which in the limit of small damping yields identical results. The LLG equation is, \frac{\partial \mathbf{m}}{\partial t} = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}.
The constant α is the Gilbert phenomenological damping parameter and depends on the solid, and γ is the electron gyromagnetic ratio. Here \mathbf{m} = \mathbf{M}/M_\mathrm{S}.
The various forms of the equation are commonly used in micromagnetics to model the effects of a magnetic field on ferromagnetic materials. An additional term was added to the equation to describe the effect of spin polarized current on magnets.
","[SEP]What is the Landau-Lifshitz-Gilbert equation used for in physics?","['C', 'A', 'D']",1.0
What is spatial dispersion?,"In the physics of continuous media, spatial dispersion is a phenomenon where material parameters such as permittivity or conductivity have dependence on wavevector. Spatial dispersion refers to the non-local response of the medium to the space; this can be reworded as the wavevector dependence of the permittivity. Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Within optics, dispersion is a property of telecommunication signals along transmission lines (such as microwaves in coaxial cable) or the pulses of light in optical fiber. Temporal dispersion represents memory effects in systems, commonly seen in optics and electronics. In optics and in wave propagation in general, dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency; sometimes the term chromatic dispersion is used for specificity to optics in particular. A dispersion is a system in which distributed particles of one material are dispersed in a continuous phase of another material. In materials science, dispersion is the fraction of atoms of a material exposed to the surface. Although the term is used in the field of optics to describe light and other electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as acoustic dispersion in the case of sound and seismic waves, and in gravity waves (ocean waves). However, dispersion also has an effect in many other circumstances: for example, group-velocity dispersion causes pulses to spread in optical fibers, degrading signals over long distances; also, a cancellation between group-velocity dispersion and nonlinear effects leads to soliton waves. == Material and waveguide dispersion == Most often, chromatic dispersion refers to bulk material dispersion, that is, the change in refractive index with optical frequency. Spatial dispersion and temporal dispersion may occur in the same system. == Origin: nonlocal response == The origin of spatial dispersion is nonlocal response, where response to a force field appears at many locations, and can appear even in locations where the force is zero. Spatial dispersion contributes relatively small perturbations to optics, giving weak effects such as optical activity. Spatial dispersion on the other hand represents spreading effects and is usually significant only at microscopic length scales. Most commonly, the spatial dispersion in permittivity ε is of interest. === Crystal optics === Inside crystals there may be a combination of spatial dispersion, temporal dispersion, and anisotropy.Agranovich & Ginzburg . Dispersion is a material property. Spatial dispersion also plays an important role in the understanding of electromagnetic metamaterials. The conductivity function \tilde\sigma(k,\omega) has spatial dispersion if it is dependent on the wavevector k. Material dispersion can be a desirable or undesirable effect in optical applications. All common transmission media also vary in attenuation (normalized to transmission length) as a function of frequency, leading to attenuation distortion; this is not dispersion, although sometimes reflections at closely spaced impedance boundaries (e.g. crimped segments in a cable) can produce signal distortion which further aggravates inconsistent transit time as observed across signal bandwidth. 
== Examples == The most familiar example of dispersion is probably a rainbow, in which dispersion causes the spatial separation of a white light into components of different wavelengths (different colors). In optics, one important and familiar consequence of dispersion is the change in the angle of refraction of different colors of light,Dispersion Compensation. ","Spatial dispersion is a phenomenon in the physics of continuous media where material parameters such as permittivity or conductivity have dependence on time. It represents memory effects in systems, commonly seen in optics and electronics.",Spatial dispersion is a phenomenon in the physics of continuous media where material parameters such as permittivity or conductivity have dependence on time. It represents spreading effects and is usually significant only at microscopic length scales.,"Spatial dispersion is a phenomenon in the physics of continuous media where material parameters such as permittivity or conductivity have no dependence on wavevector. It represents memory effects in systems, commonly seen in optics and electronics.",Spatial dispersion is a phenomenon in the physics of continuous media where material parameters such as permittivity or conductivity have dependence on wavevector. It represents spreading effects and is usually significant only at microscopic length scales.,"Spatial dispersion is a phenomenon in the physics of continuous media where material parameters such as permittivity or conductivity have dependence on wavevector. It represents memory effects in systems, commonly seen in optics and electronics.",D,kaggle200,"In electromagnetism, spatial dispersion plays a role in a few material effects such as optical activity and doppler broadening. Spatial dispersion also plays an important role in the understanding of electromagnetic metamaterials. Most commonly, the spatial dispersion in permittivity ""ε"" is of interest.
The conductivity function formula_13 has spatial dispersion if it is dependent on the wavevector ""k"". This occurs if the spatial function formula_16 is not pointlike (delta function) response in ""x-x' "".
Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Temporal dispersion represents memory effects in systems, commonly seen in optics and electronics. Spatial dispersion on the other hand represents spreading effects and is usually significant only at microscopic length scales. Spatial dispersion contributes relatively small perturbations to optics, giving weak effects such as optical activity. Spatial dispersion and temporal dispersion may occur in the same system.
In the physics of continuous media, spatial dispersion is a phenomenon where material parameters such as permittivity or conductivity have dependence on wavevector. Normally, such a dependence is assumed to be absent for simplicity, however spatial dispersion exists to varying degrees in all materials.","A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors.
Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Temporal dispersion represents memory effects in systems, commonly seen in optics and electronics. Spatial dispersion on the other hand represents spreading effects and is usually significant only at microscopic length scales. Spatial dispersion contributes relatively small perturbations to optics, giving weak effects such as optical activity. Spatial dispersion and temporal dispersion may occur in the same system.
In the physics of continuous media, spatial dispersion is a phenomenon where material parameters such as permittivity or conductivity have dependence on wavevector. Normally, such a dependence is assumed to be absent for simplicity, however spatial dispersion exists to varying degrees in all materials.","Spatial dispersion on the other hand represents spreading effects and is usually significant only at microscopic length scales. A measure of spatial dispersion that is not based on the covariance matrix is the average distance between nearest neighbors.
Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Spatial dispersion and temporal dispersion may occur in the same system.
In the physics of continuous media, spatial dispersion is a phenomenon where material parameters such as permittivity or conductivity have dependence on wavevector. This occurs if the spatial function formula_16 is not pointlike (delta function) response in ""x-x' "".
Spatial dispersion can be compared to temporal dispersion, the latter often just called dispersion. Spatial dispersion contributes relatively small perturbations to optics, giving weak effects such as optical activity. Normally, such a dependence is assumed to be absent for simplicity, however spatial dispersion exists to varying degrees in all materials. In electromagnetism, spatial dispersion plays a role in a few material effects such as optical activity and doppler broadening. Normally, such a dependence is assumed to be absent for simplicity, however spatial dispersion exists to varying degrees in all materials. Temporal dispersion represents memory effects in systems, commonly seen in optics and electronics. Most commonly, the spatial dispersion in permittivity ""ε"" is of interest.
The conductivity function formula_13 has spatial dispersion if it is dependent on the wavevector ""k"". Spatial dispersion also plays an important role in the understanding of electromagnetic metamaterials.
","[SEP]What is spatial dispersion?","['D', 'B', 'E']",1.0
What are the constituents of cold dark matter?,"Cold Dark Matter may refer to: * Cold dark matter, a hypothetical form of dark matter in cosmology and physics * Cold Dark Matter (Psychic TV album) * Cold Dark Matter (Red Harvest album), 2000 The WIMPs (weakly interacting massive particles), when produced non-thermally, could be candidates for warm dark matter. It consists of particles ""that emerge relatively late in cosmic time (z ≲ 1000) and are born non-relativistic from the decays of cold particles"". == Notes == Category:Physical cosmology Category:Dark matter Warm Dark Matter. Virtually every aspect of modern dark-matter research is covered, with the wide authorship providing detailed but consistently readable contributions. … About.com. Retrieved 23 Jan., 2013. http://space.about.com/od/astronomydictionary/g/Warm-Dark- Matter.htm. ==Further reading== * Category:Dark matter Particle Dark Matter: Observations, Models and Searches (2010) is an edited volume that describes the theoretical and experimental aspects of the dark matter problem from particle physics, astrophysics, and cosmological perspectives. Particle dark matter (G. Bertone and J. Silk) *2. Warm dark matter (WDM) is a hypothesized form of dark matter that has properties intermediate between those of hot dark matter and cold dark matter, causing structure formation to occur bottom-up from above their free-streaming scale, and top-down below their free streaming scale. ==External links== * Particle Dark Matter at Cambridge University Press * WorldCat link to Particle Dark Matter ==References== Category:2010 non-fiction books Dark matter and stars (G. Bertone) ==Critical response== Il Nuovo Saggiatore writes ""this book represents a text that any scholar whose research field is somewhat related to dark matter will find useful to have within easy reach … graduate students will find in this book an extremely useful guide into the vast and interdisciplinary field of dark matter."" In general, however, the thermally produced WIMPs are cold dark matter candidates. ==keVins and GeVins== One possible WDM candidate particle with a mass of a few keV comes from introducing two new, zero charge, zero lepton number fermions to the Standard Model of Particle Physics: ""keV-mass inert fermions"" (keVins) and ""GeV-mass inert fermions"" (GeVins). keVins are overproduced if they reach thermal equilibrium in the early universe, but in some scenarios the entropy production from the decays of unstable heavier particles may suppress their abundance to the correct value. Fuzzy cold dark matter is a hypothetical form of cold dark matter proposed to solve the cuspy halo problem. Fuzzy cold dark matter is a limit of scalar field dark matter without self-interaction. The Observatory writes ""Particle Dark Matter is a very welcome addition. Meta-cold dark matter, also known as mCDM, is a form of cold dark matter proposed to solve the cuspy halo problem. Dark matter at the centers of galaxies (D. Merritt) *6. New research (2023) has left fuzzy dark matter as the leading model, replacing WIMP dark matter. == Notes == Category:Physical cosmology Category:Dark matter Category:Hypothetical objects Dark matter and BBN (K. Jedamzik and M. Pospelov) *29. This lower limit on the mass of warm dark matter thermal relics mWDM > 4.6 keV; or adding dwarf satellite counts mWDM > 6.3 keV ==See also== * ** ** * * ==References== * * * * * Millis, John. 
","They are unknown, but possibilities include large objects like MACHOs or new particles such as WIMPs and axions.",They are known to be black holes and Preon stars.,They are only MACHOs.,They are clusters of brown dwarfs.,They are new particles such as RAMBOs.,A,kaggle200,"In the cold dark matter theory, structure grows hierarchically, with small objects collapsing under their self-gravity first and merging in a continuous hierarchy to form larger and more massive objects. Predictions of the cold dark matter paradigm are in general agreement with observations of cosmological large-scale structure.
Warm dark matter (WDM) is a hypothesized form of dark matter that has properties intermediate between those of hot dark matter and cold dark matter, causing structure formation to occur bottom-up from above their free-streaming scale, and top-down below their free streaming scale. The most common WDM candidates are sterile neutrinos and gravitinos. The WIMPs (weakly interacting massive particles), when produced non-thermally, could be candidates for warm dark matter. In general, however, the thermally produced WIMPs are cold dark matter candidates.
Dark matter is detected through its gravitational interactions with ordinary matter and radiation. As such, it is very difficult to determine what the constituents of cold dark matter are. The candidates fall roughly into three categories:
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes and Preon stars) or RAMBOs (such as clusters of brown dwarfs), to new particles such as WIMPs and axions.","Cold dark matter Cold dark matter offers the simplest explanation for most cosmological observations. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy cluster formation, and most particle candidates slowed early.
Dark matter is detected through its gravitational interactions with ordinary matter and radiation. As such, it is very difficult to determine what the constituents of cold dark matter are. The candidates fall roughly into three categories: Axions, very light particles with a specific type of self-interaction that makes them a suitable CDM candidate. In recent years, axions have become one of the most promising candidates for dark matter. Axions have the theoretical advantage that their existence solves the strong CP problem in quantum chromodynamics, but axion particles have only been theorized and never detected. Axions are an example of a more general category of particle called a WISP (weakly interacting ""slender"" or ""slim"" particle), which are the low-mass counterparts of WIMPs.Massive compact halo objects (MACHOs), large, condensed objects such as black holes, neutron stars, white dwarfs, very faint stars, or non-luminous objects like planets. The search for these objects consists of using gravitational lensing to detect the effects of these objects on background galaxies. Most experts believe that the constraints from those searches rule out MACHOs as a viable dark matter candidate.Weakly interacting massive particles (WIMPs). There is no currently known particle with the required properties, but many extensions of the standard model of particle physics predict such particles. The search for WIMPs involves attempts at direct detection by highly sensitive detectors, as well as attempts at production of WIMPs by particle accelerators. Historically, WIMPs were regarded as one of the most promising candidates for the composition of dark matter, but in recent years WIMPs have since been supplanted by axions with the non-detection of WIMPs in experiments. The DAMA/NaI experiment and its successor DAMA/LIBRA have claimed to have directly detected dark matter particles passing through the Earth, but many scientists remain skeptical because no results from similar experiments seem compatible with the DAMA results.
The constituents of cold dark matter are unknown. Possibilities range from large objects like MACHOs (such as black holes and Preon stars) or RAMBOs (such as clusters of brown dwarfs), to new particles such as [WIMPs and axions The 1997 DAMA/NaI experiment and its successor DAMA/LIBRA in 2013, claimed to directly detect dark matter particles passing through the Earth, but many researchers remain skeptical, as negative results from similar experiments seem incompatible with the DAMA results.","The candidates fall roughly into three categories:
The constituents of cold dark matter are unknown. As such, it is very difficult to determine what the constituents of cold dark matter are. Cold dark matter Cold dark matter offers the simplest explanation for most cosmological observations. The WIMPs (weakly interacting massive particles), when produced non-thermally, could be candidates for warm dark matter. Predictions of the cold dark matter paradigm are in general agreement with observations of cosmological large-scale structure.
Warm dark matter (WDM) is a hypothesized form of dark matter that has properties intermediate between those of hot dark matter and cold dark matter, causing structure formation to occur bottom-up from above their free-streaming scale, and top-down below their free streaming scale. In general, however, the thermally produced WIMPs are cold dark matter candidates.
Dark matter is detected through its gravitational interactions with ordinary matter and radiation. Historically, WIMPs were regarded as one of the most promising candidates for the composition of dark matter, but in recent years WIMPs have since been supplanted by axions with the non-detection of WIMPs in experiments. The DAMA/NaI experiment and its successor DAMA/LIBRA have claimed to have directly detected dark matter particles passing through the Earth, but many scientists remain skeptical because no results from similar experiments seem compatible with the DAMA results.
The constituents of cold dark matter are unknown. It is dark matter composed of constituents with an FSL much smaller than a protogalaxy. This is the focus for dark matter research, as hot dark matter does not seem capable of supporting galaxy or galaxy cluster formation, and most particle candidates slowed early.
Dark matter is detected through its gravitational interactions with ordinary matter and radiation. In the cold dark matter theory, structure grows hierarchically, with small objects collapsing under their self-gravity first and merging in a continuous hierarchy to form larger and more massive objects.
","[SEP]What are the constituents of cold dark matter?","['A', 'E', 'B']",1.0
What is the mechanism of FTIR?,"Nano-FTIR (nanoscale Fourier transform infrared spectroscopy) is a scanning probe technique that utilizes as a combination of two techniques: Fourier transform infrared spectroscopy (FTIR) and scattering-type scanning near-field optical microscopy (s-SNOM). Fourier-transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. A typical nano-FTIR setup thus consists of an atomic force microscope, a broadband infrared light source used for tip illumination, and a Michelson interferometer acting as Fourier-transform spectrometer. A mathematical approach Fourier Transform converts the raw data into spectrum. === Advantages === * The FTIR technique uses a polychromatic beam of light with a wide range of continuous frequencies simultaneously, and therefore allows a much higher speed of scanning versus the conventional monochromatic dispersive spectroscopy. As s-SNOM, nano-FTIR is based on atomic-force microscopy (AFM), where a sharp tip is illuminated by an external light source and the tip- scattered light (typically back-scattered) is detected as a function of tip position. The diffuse radiation is then focused again on a mirror when they exit and the combined IR beam carries the bulk sample information to the detector. 320px|DRIFT Spectroscopy Reflection- absorption FTIR * Sample is usually prepared as a thick block and is polished into a smooth surface. * Without the slit used in dispersive spectroscopy, FTIR allows more light to enter the spectrometer and gives a higher signal-to-noise ratio, i.e. a less- disturbed signal. Fourier transform infrared spectroscopy (FTIR) is a spectroscopic technique that has been used for analyzing the fundamental molecular structure of geological samples in recent decades. In nano- FTIR, the sample stage is placed in one of the interferometer arms, which allows for recording both amplitude and phase of the detected light (unlike conventional FTIR that normally does not yield phase information). * ATR-FTIR allows the functional group near the interface of the crystals to be analyzed when the IR radiation is totally internal reflected at the surface. With the detection of phase, nano-FTIR provides complete information about near fields, which is essential for quantitative studies and many other applications. Nano-FTIR detects the tip-scattered light interferometrically. Most of the geology applications of FTIR focus on the mid-infrared range, which is approximately 4000 to 400 cm−1. == Instrumentation == thumb|360px| The basic components of a Michelson Interferometer: a coherent light source, a detector, a beam splitter, a stationary mirror and a movable mirror. Nano-FTIR is capable of performing infrared (IR) spectroscopy of materials in ultrasmall quantities and with nanoscale spatial resolution. In other words, nano-FTIR has a unique capability of recovering the same information about thin-film samples that is typically returned by ellipsometry or impedance spectroscopy, yet with nanoscale spatial resolution. As a direct consequence of being quantitative technique (i.e. capable of highly reproducible detection of both near-field amplitude & phase and well understood near-field interaction models), nano- FTIR also provides means for the quantitative studies of the sample interior (within the probing range of the tip near field, of course). 
The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits. ==Motivation == FTIR is a method of measuring infrared absorption and emission spectra. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969 (Digilab FTIRs are now a part of Agilent technologies's molecular product line after it acquired spectroscopy business from Varian). ==Michelson interferometer== thumb|upright=1.25|Schematic diagram of a Michelson interferometer, configured for FTIR In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. The term Fourier-transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum. == Conceptual introduction == The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible (""UV-vis"") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. This permits a direct comparison of nano-FTIR spectra with conventional absorption spectra of the sample material, thus allowing for simple spectroscopic identification according to standard FTIR databases. == History == Nano-FTIR was first described in 2005 in a patent by Ocelic and Hillenbrand as Fourier-transform spectroscopy of tip- scattered light with an asymmetric spectrometer (i.e. the tip/sample placed inside one of the interferometer arms). ","The mechanism of FTIR is called ray optics, which is a good analog to visualize quantum tunneling.","The mechanism of FTIR is called scattering, which is a good analog to visualize quantum tunneling.","The mechanism of FTIR is called frustrated TIR, which is a good analog to visualize quantum tunneling.","The mechanism of FTIR is called evanescent-wave coupling, which is a good analog to visualize quantum tunneling.","The mechanism of FTIR is called total internal reflection microscopy, which is a good analog to visualize quantum tunneling.",D,kaggle200,"Quantum tunneling in water was reported as early as 1992. At that time it was known that motions can destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.
On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.
Quantum tunneling is the most common operation mode of force-sensing resistors. A conductive polymer operating on the basis of quantum tunneling exhibits a resistance decrement for incremental values of stress formula_4. Commercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate on the basis of quantum tunneling. The Peratech sensors are also referred to in the literature as quantum tunnelling composite.
The mechanism of FTIR is called ""evanescent-wave coupling"", and is a directly visible example of quantum tunneling. Due to the wave nature of matter, an electron has a non-zero probability of ""tunneling"" through a barrier, even if classical mechanics would say that its energy is insufficient. Similarly, due to the wave nature of light, a photon has a non-zero probability of crossing a gap, even if ray optics would say that its approach is too oblique.","Quantum tunneling The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers. On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds. Later in the same year, the discovery of the quantum tunneling of water molecules was reported.
Quantum tunneling in FSRs Quantum tunneling is the most common operation mode of force-sensing resistors. A conductive polymer operating on the basis of quantum tunneling exhibits a resistance decrement for incremental values of stress σ . Commercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate based on quantum tunneling. The Peratech sensors are also referred to in the literature as quantum tunnelling composite.
Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig. 10). If the glass is held loosely, contact may not be sufficiently close and widespread to produce a noticeable effect. But if it is held more tightly, the ridges of one's fingerprints interact strongly with the evanescent waves, allowing the ridges to be seen through the otherwise totally reflecting glass-air surface.The same effect can be demonstrated with microwaves, using paraffin wax as the ""internal"" medium (where the incident and reflected waves exist). In this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable.The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interface. This effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy.The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunneling. Due to the wave nature of matter, an electron has a non-zero probability of ""tunneling"" through a barrier, even if classical mechanics would say that its energy is insufficient. Similarly, due to the wave nature of light, a photon has a non-zero probability of crossing a gap, even if ray optics would say that its approach is too oblique.","The Peratech sensors are also referred to in the literature as quantum tunnelling composite.
The mechanism of FTIR is called ""evanescent-wave coupling"", and is a directly visible example of quantum tunnelingCommercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate based on quantum tunnelingCommercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate on the basis of quantum tunnelingThis effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy.The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunnelingLater in the same year, the discovery of the quantum tunneling of water molecules was reported.
Quantum tunneling in FSRs Quantum tunneling is the most common operation mode of force-sensing resistorsIn this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable.The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interfaceThe Peratech sensors are also referred to in the literature as quantum tunnelling composite.
Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig. 10)Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.
Quantum tunneling is the most common operation mode of force-sensing resistorsAt that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomersAt that time it was known that motions can destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.
On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamerUnlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bondsIf the glass is","The Peratech sensors are also referred to in the literature as quantum tunnelling composite.
The mechanism of FTIR is called ""evanescent-wave coupling"", and is a directly visible example of quantum tunnelingCommercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate based on quantum tunnelingCommercial FSRs such as the FlexiForce, Interlink and Peratech sensors operate on the basis of quantum tunnelingThis effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy.The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunnelingLater in the same year, the discovery of the quantum tunneling of water molecules was reported.
Quantum tunneling in FSRs Quantum tunneling is the most common operation mode of force-sensing resistorsIn this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable.The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interfaceThe Peratech sensors are also referred to in the literature as quantum tunnelling composite.
Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig. 10)Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.
Quantum tunneling is the most common operation mode of force-sensing resistorsAt that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomersAt that time it was known that motions can destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.
On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamerUnlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bondsIf the glass is[SEP]What is the mechanism of FTIR?","['D', 'C', 'E']",1.0
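The tunneling passages in this row state that an electron has a non-zero probability of crossing a classically forbidden barrier, and that the probability falls off sharply with barrier width (the distance sensitivity behind FTIR and tunneling-type force sensors). A minimal numerical sketch of that claim, assuming a 1D rectangular barrier, an electron-mass particle, and illustrative energies and widths that are not taken from the row:

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def transmission(E_eV, V0_eV, width_nm):
    """Exact transmission probability of an electron through a 1D rectangular
    barrier of height V0 and the given width, for E < V0."""
    E, V0 = E_eV * EV, V0_eV * EV
    L = width_nm * 1e-9
    kappa = math.sqrt(2.0 * M_E * (V0 - E)) / HBAR
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * L)**2) / (4.0 * E * (V0 - E)))

# Classically forbidden (E < V0), yet the transmission is non-zero,
# and it drops roughly exponentially as the barrier widens:
print(transmission(1.0, 2.0, 0.5))   # about 2e-2
print(transmission(1.0, 2.0, 1.0))   # about 1e-4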
What is the origin of the permanent moment in paramagnetism?,"In magnetic materials, the cause of the magnetic moment are the spin and orbital angular momentum states of the electrons, and varies depending on whether atoms in one region are aligned with atoms in another. === Magnetic pole model === thumb|upright|An electrostatic analog for a magnetic moment: two opposing charges separated by a finite distance. These unpaired dipoles (often called simply ""spins"", even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field leading to a macroscopic effect called paramagnetism. This dipole moment comes from the more fundamental property of the electron that it has quantum mechanical spin. The origin of the magnetic moments responsible for magnetization can be either microscopic electric currents resulting from the motion of electrons in atoms, or the spin of the electrons or the nuclei. It is these intrinsic magnetic moments that give rise to the macroscopic effects of magnetism, and other phenomena, such as electron paramagnetic resonance. In this definition, the magnetic dipole moment of a system is the negative gradient of its intrinsic energy, , with respect to external magnetic field: : \mathbf{m} = -\hat\mathbf x\frac{\partial U_{\rm int}}{\partial B_x}-\hat\mathbf y\frac{\partial U_{\rm int}}{\partial B_y} -\hat\mathbf z\frac{\partial U_{\rm int}}{\partial B_z}. For many magnets the first non-zero term is the magnetic dipole moment. Fortunately, the linear relationship between the magnetic dipole moment of a particle and its angular momentum still holds, although it is different for each particle. In electromagnetism, the magnetic moment is the magnetic strength and orientation of a magnet or other object that produces a magnetic field. : Number of unpaired electrons Spin-only moment () 1 1.73 2 2.83 3 3.87 4 4.90 5 5.92 === Elementary particles === In atomic and nuclear physics, the Greek symbol represents the magnitude of the magnetic moment, often measured in Bohr magnetons or nuclear magnetons, associated with the intrinsic spin of the particle and/or with the orbital motion of the particle in a system. More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, the component of the magnetic moment that can be represented by an equivalent magnetic dipole: a magnetic north and south pole separated by a very small distance. The magnetic dipole moment of an object is readily defined in terms of the torque that the object experiences in a given magnetic field. thumb|Paramagnetism, ferromagnetism and spin waves Ferromagnetism is a property of certain materials (such as iron) that results in a significant, observable magnetic permeability, and in many cases, a significant magnetic coercivity, allowing the material to form a permanent magnet. These fields are related by , where is the magnetization. == Relation to angular momentum == The magnetic moment has a close connection with angular momentum called the gyromagnetic effect. See electron magnetic moment and Bohr magneton for more details. == Atoms, molecules, and elementary particles == Fundamentally, contributions to any system's magnetic moment may come from sources of two kinds: motion of electric charges, such as electric currents; and the intrinsic magnetism of elementary particles, such as the electron. 
In classical electromagnetism, magnetization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. The first term describes precession of the moment about the effective field, while the second is a damping term related to dissipation of energy caused by interaction with the surroundings. === Magnetic moment of an electron === Electrons and many elementary particles also have intrinsic magnetic moments, an explanation of which requires a quantum mechanical treatment and relates to the intrinsic angular momentum of the particles as discussed in the article Electron magnetic moment. Further, a torque applied to a relatively isolated magnetic dipole such as an atomic nucleus can cause it to precess (rotate about the axis of the applied field). The magnetic field of a magnetic dipole is proportional to its magnetic dipole moment. See below for more details. == Effects of an external magnetic field == === Torque on a moment === The torque on an object having a magnetic dipole moment in a uniform magnetic field is: : \boldsymbol{\tau} = \mathbf{m} \times\mathbf{B}. ",The permanent moment is generally due to the spin of unpaired electrons in atomic or molecular electron orbitals.,The permanent moment is due to the alignment of dipoles perpendicular to the applied field.,"The permanent moment is due to the torque provided on the magnetic moments by an applied field, which tries to align the dipoles perpendicular to the applied field.",The permanent moment is due to the quantum-mechanical properties of spin and angular momentum.,The permanent moment is due to the interaction of dipoles with one another and are randomly oriented in the absence of an external field due to thermal agitation.,A,kaggle200,"it can be explicitly seen that the instantaneous change in magnetic moment occurs perpendicular to both the applied field and the direction of the moment, with no change in moment in the direction of the field.
which is the potential due to the applied field and, in addition, a dipole in the direction of the applied field (the ""z""-direction) of dipole moment:
PCTFE exhibits a permanent dipole moment due to the asymmetry of its repeating unit. This dipole moment is perpendicular to the carbon-chain axis.
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum.","Carryover factors When a joint is released, balancing moment occurs to counterbalance the unbalanced moment. The balancing moment is initially the same as the fixed-end moment. This balancing moment is then carried over to the member's other end. The ratio of the carried-over moment at the other end to the fixed-end moment of the initial end is the carryover factor.
Permanent prevention of pseudopregnancy is accomplished with spaying.
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment). In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum.","The permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals (see Magnetic moment)The ratio of the carried-over moment at the other end to the fixed-end moment of the initial end is the carryover factor.
Permanent prevention of pseudopregnancy is accomplished with spaying.
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation, resulting in zero net magnetic moment. This dipole moment is perpendicular to the carbon-chain axis.
Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments (dipoles), even in the absence of an applied field. It can be explicitly seen that the instantaneous change in magnetic moment occurs perpendicular to both the applied field and the direction of the moment, with no change in moment in the direction of the field.
which is the potential due to the applied field and, in addition, a dipole in the direction of the applied field (the ""z""-direction) of dipole moment:
PCTFE exhibits a permanent dipole moment due to the asymmetry of its repeating unit. When a magnetic field is applied, the dipoles will tend to align with the applied field, resulting in a net magnetic moment in the direction of the applied field. In the classical description, this alignment can be understood to occur due to a torque being provided on the magnetic moments by an applied field, which tries to align the dipoles parallel to the applied field. The balancing moment is initially the same as the fixed-end moment. However, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum. Carryover factors: when a joint is released, a balancing moment occurs to counterbalance the unbalanced moment.","
[SEP]What is the origin of the permanent moment in paramagnetism?","['D', 'C', 'B']",0.0
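The paramagnetism text in this row says the dipoles are randomly oriented by thermal agitation and only partially align with an applied field. A small sketch of that competition for non-interacting spin-1/2 moments, where Boltzmann weighting of the two allowed orientations gives M = n*mu*tanh(mu*B/(k_B*T)); the density, field and temperature values are illustrative assumptions, not taken from the row:

import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
MU_B = 9.2740100783e-24   # Bohr magneton, J/T

def magnetization_spin_half(n_per_m3, B_tesla, T_kelvin, mu=MU_B):
    """Net magnetization of n non-interacting spin-1/2 moments per m^3:
    M = n * mu * tanh(mu * B / (k_B * T))."""
    x = mu * B_tesla / (K_B * T_kelvin)
    return n_per_m3 * mu * math.tanh(x)

n = 1e28  # moments per cubic metre (illustrative)
# Room temperature: mu*B << k_B*T, so thermal agitation wins and only a tiny
# net moment appears, growing linearly with B (Curie-law regime).
print(magnetization_spin_half(n, 1.0, 300.0))
# Low temperature: the same field aligns a large fraction of the dipoles,
# and M approaches the saturation value n*mu.
print(magnetization_spin_half(n, 1.0, 1.0), n * MU_B)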
What is the reason that Newton's second law cannot be used to calculate the development of a physical system in quantum mechanics?,"Classical Newtonian physics has, formally, been replaced by quantum mechanics on the small scale and relativity on the large scale. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. Despite the name, particles do not literally spin around an axis, and quantum mechanical spin has no correspondence in classical physics. At that point it was realised that the mathematics of the new quantum mechanics was already laid out in it. The first applications of quantum mechanics to physical systems were the algebraic determination of the hydrogen spectrum by Wolfgang Pauli and the treatment of diatomic molecules by Lucy Mensing. ==Modern quantum mechanics== Heisenberg formulated an early version of the uncertainty principle in 1927, analyzing a thought experiment where one attempts to measure an electron's position and momentum simultaneously. Thus special relativity rejects the absolute simultaneity assumed by classical mechanics; and quantum mechanics does not permit one to speak of properties of the system (exact position, say) other than those that can be connected to macro scale observations. Furthermore, to the postulates of quantum mechanics one should also add basic statements on the properties of spin and Pauli's exclusion principle, see below. === Spin === In addition to their other properties, all particles possess a quantity called spin, an intrinsic angular momentum. Quantum field theory has driven the development of more sophisticated formulations of quantum mechanics, of which the ones presented here are simple special cases. Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics. The history of quantum mechanics is a fundamental part of the history of modern physics. This was followed by other quantum models such as the John William Nicholson model of 1912 which was nuclear and discretized angular momentum.J. W. Nicholson, Month. They proposed that, of all closed classical orbits traced by a mechanical system in its phase space, only the ones that enclosed an area which was a multiple of Planck's constant were actually allowed. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation as an approximation of the generalised case of de Broglie's theory. Position and momentum are not things waiting for us to discover; rather, they are the results that are obtained by performing certain procedures. == Notes == #Messiah, Albert, Quantum Mechanics, volume I, pp. 45–50. == See also == * Heisenberg's microscope * Philosophy of physics == References == * Albert Messiah, Quantum Mechanics, English translation by G. M. 
Temmer of Mécanique Quantique, 1966, John Wiley and Sons * A lecture to his statistical mechanics class at the University of California at Santa Barbara by Dr. Herbert P. Broida (1920–1978) * ""Physics and the Real World"" by George F. R. Ellis, Physics Today, July, 2005 == External links == * Bohmian Mechanics website Category:Determinism Category:Experimental physics Category:Quantum measurement Category:Randomness Category:Philosophy of physics Category:Philosophy of language Category:Interpretation (philosophy) Because most humans continue to think in terms of the kind of events we perceive in the human scale of daily life, it became necessary to provide a new philosophical interpretation of classical physics. In particular, Einstein took the position that quantum mechanics must be incomplete, which motivated research into so-called hidden-variable theories. (""We became more and more convinced that a radical change of the foundations of physics was necessary, i.e., a new kind of mechanics for which we used the term quantum mechanics. Viewed through the lens of quantum mechanics or relativity, we can now see that classical physics, imported from the world of our everyday experience, includes notions for which there is no actual evidence. ","The existence of particle spin, which is linear momentum that can be described by the cumulative effect of point-like motions in space.","The existence of particle spin, which is angular momentum that is always equal to zero.","The existence of particle spin, which is linear momentum that cannot be described by the cumulative effect of point-like motions in space.","The existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.","The existence of particle spin, which is angular momentum that can be described by the cumulative effect of point-like motions in space.",D,kaggle200,"A spin- particle is characterized by an angular momentum quantum number for spin s of . In solutions of the Schrödinger equation, angular momentum is quantized according to this number, so that total spin angular momentum
The spin angular momentum of light (SAM) is the component of angular momentum of light that is associated with the quantum spin and the rotation between the polarization degrees of freedom of the photon.
The quantity ""S"" is the density of spin angular momentum (spin in this case is not only for a point-like particle, but also for an extended body), and ""M"" is the density of orbital angular momentum. The total angular momentum is always the sum of spin and orbital contributions.
While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.","Spin, orbital, and total angular momentum The classical definition of angular momentum as L=r×p can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (See also the discussion below of the angular momentum operators as the generators of rotations.) However, in quantum physics, there is another type of angular momentum, called spin angular momentum, represented by the spin operator S. Spin is often depicted as a particle literally spinning around an axis, but this is a misleading and inaccurate picture: spin is an intrinsic property of a particle, unrelated to any sort of motion in space and fundamentally different from orbital angular momentum. All elementary particles have a characteristic spin (possibly zero), and almost all elementary particles have nonzero spin. For example electrons have ""spin 1/2"" (this actually means ""spin ħ/2""), photons have ""spin 1"" (this actually means ""spin ħ""), and pi-mesons have spin 0.Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of all particles and fields. (For one particle, J = L + S.) Conservation of angular momentum applies to J, but not to L or S; for example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total remaining constant. Electrons and photons need not have integer-based values for total angular momentum, but can also have half-integer values.In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes.
The spin angular momentum of light (SAM) is the component of angular momentum of light that is associated with the quantum spin and the rotation between the polarization degrees of freedom of the photon.
Relation to Newton's second law of motion While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.","Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with timeNote, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space.As explained by Van Vleck, the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those for the components about space-fixed axes.
The spin angular momentum of light (SAM) is the component of angular momentum of light that is associated with the quantum spin and the rotation between the polarization degrees of freedom of the photon.
Relation to Newton's second law of motion: While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space. Spin, orbital, and total angular momentum: The classical definition of angular momentum as L = r × p can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. The total angular momentum is always the sum of spin and orbital contributions.
While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force).","
[SEP]What is the reason that Newton's second law cannot be used to calculate the development of a physical system in quantum mechanics?","['D', 'C', 'E']",1.0
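The angular-momentum passages in this row argue that, classically, initial positions and velocities plus Newton's second law determine the whole trajectory, and that orbital angular momentum conservation can be read off from the same integration, whereas spin is not built from any point-like motion. A rough sketch under stated assumptions (an attractive central force, unit mass, made-up initial conditions, simple Euler-Cromer stepping):

import numpy as np

def central_force_orbit(r0, v0, k=1.0, m=1.0, dt=1e-3, steps=50_000):
    """Integrate m*d2r/dt2 = -k*r/|r|^3 with Euler-Cromer steps and record
    the orbital angular momentum L_z = m*(x*vy - y*vx) along the way."""
    r = np.array(r0, dtype=float)
    v = np.array(v0, dtype=float)
    L_z = []
    for _ in range(steps):
        a = -k * r / (m * np.linalg.norm(r) ** 3)  # Newton's second law: a = F/m
        v += a * dt                                 # update velocity first...
        r += v * dt                                 # ...then position (Euler-Cromer)
        L_z.append(m * (r[0] * v[1] - r[1] * v[0]))
    return np.array(L_z)

L_z = central_force_orbit(r0=[1.0, 0.0], v0=[0.0, 0.8])
# For a point particle the orbital angular momentum stays constant up to
# integration error; spin, by contrast, is not generated by such motion.
print(L_z.min(), L_z.max())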
"What is the butterfly effect, as defined by Lorenz in his book ""The Essence of Chaos""?","In the book entitled The Essence of Chaos published in 1993, Lorenz defined butterfly effect as: ""The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."" In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The butterfly effect describes a phenomenon in chaos theory whereby a minor change in circumstances can cause a large change in outcome. While the ""butterfly effect"" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. A short documentary that explains the ""butterfly effect"" in context of Lorenz's work. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. ==History== In The Vocation of Man (1800), Johann Gottlieb Fichte says ""you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole"". According to science journalist Peter Dizikes, the films Havana and The Butterfly Effect mischaracterize the butterfly effect by asserting the effect can be calculated with certainty, because this is the opposite of its scientific meaning in chaos theory as it relates to the unpredictability of certain physical systems; Dizikes writes in 2008, ""The larger meaning of the butterfly effect is not that we can readily track such connections, but that we can't."" Other authors suggest that the butterfly effect can be observed in quantum systems. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. An animation of the Lorenz attractor shows the continuous evolution. 
==Theory and mathematical definition== Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. An introductory primer on chaos and fractals * * New England Complex Systems Institute - Concepts: Butterfly Effect * ChaosBook.org. Advanced graduate textbook on chaos (no fractals) * Category:Causality Category:Chaos theory Category:Determinism Category:Metaphors referring to insects Category:Physical phenomena Category:Stability theory In recent studies, it was reported that both meteorological and non- meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. In the 1993 movie Jurassic Park, Dr. Ian Malcolm (played by Jeff Goldblum) attempts to explain chaos theory to Dr. Ellie Sattler (played by Laura Dern), specifically referencing the butterfly effect, by stating ""It simply deals with unpredictability in complex systems"", and ""The shorthand is 'the butterfly effect.' The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The concept has been widely adopted by popular culture, and interpreted to mean that small events have a rippling effect that cause much larger events to occur, and has become a common reference. ==Examples== ===""A Sound of Thunder"" === The 1952 short story ""A Sound of Thunder"" by Ray Bradbury explores the concept of how the death of a butterfly in the past could have drastic changes in the future, and has been used as an example of ""the butterfly effect"" and how to consider chaos theory and the physics of time travel. ","The butterfly effect is the phenomenon that a small change in the initial conditions of a dynamical system can cause subsequent states to differ greatly from the states that would have followed without the alteration, as defined by Einstein in his book ""The Theory of Relativity.""","The butterfly effect is the phenomenon that a large change in the initial conditions of a dynamical system has no effect on subsequent states, as defined by Lorenz in his book ""The Essence of Chaos.""","The butterfly effect is the phenomenon that a small change in the initial conditions of a dynamical system can cause significant differences in subsequent states, as defined by Lorenz in his book ""The Essence of Chaos.""","The butterfly effect is the phenomenon that a small change in the initial conditions of a dynamical system has no effect on subsequent states, as defined by Lorenz in his book ""The Essence of Chaos.""","The butterfly effect is the phenomenon that a large change in the initial conditions of a dynamical system can cause significant differences in subsequent states, as defined by Lorenz in his book ""The Essence of Chaos.""",C,kaggle200,"Sensitivity to initial conditions is popularly known as the ""butterfly effect"", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled ""Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?"". 
The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
As suggested in Lorenz's book entitled """"The Essence of Chaos"""", published in 1993, """"sensitive dependence can serve as an acceptable definition of chaos"""". In the same book, Lorenz defined the butterfly effect as: """"The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."""" The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).
In the book entitled “""The Essence of Chaos""” published in 1993, Lorenz defined butterfly effect as: ""“The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration.”"" This feature is the same as sensitive dependence of solutions on initial conditions (SDIC) in . In the same book, Lorenz applied the activity of skiing and developed an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC.","In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, ""sensitive dependence can serve as an acceptable definition of chaos"". In the same book, Lorenz defined the butterfly effect as: ""The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."" The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories).A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will not naturally reach 100 °C (212 °F) or fall below −130 °C (−202 °F) on earth (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions.Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos.While the ""butterfly effect"" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be ""observationally indistinguishable"" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.In the book entitled The Essence of Chaos published in 1993, Lorenz defined butterfly effect as: ""The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."" This feature is the same as sensitive dependence of solutions on initial conditions (SDIC) in . In the same book, Lorenz applied the activity of skiing and developed an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC.","In the same book, Lorenz defined the butterfly effect as: """"The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."""" The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC)Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
As suggested in Lorenz's book entitled ""The Essence of Chaos"", published in 1993, ""sensitive dependence can serve as an acceptable definition of chaos"". Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the ""butterfly effect"" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. In the same book, Lorenz defined the butterfly effect as: ""The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration."" The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
As suggested in Lorenz's book entitled The Essence of Chaos, published in 1993, ""sensitive dependence can serve as an acceptable definition of chaos"". Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.","
[SEP]What is the butterfly effect, as defined by Lorenz in his book ""The Essence of Chaos""?","['C', 'E', 'D']",1.0
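The butterfly-effect rows define sensitive dependence on initial conditions: a tiny alteration of the initial state of a deterministic nonlinear system leads to states that differ greatly later on. A crude forward-Euler sketch of the standard Lorenz-63 system (usual parameters sigma=10, rho=28, beta=8/3; the 1e-8 perturbation and the step size are arbitrary choices) shows the separation between two nearby trajectories growing by many orders of magnitude before saturating at the size of the attractor:

import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # the "small alteration" of the initial state

for step in range(1, 6001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        # separation grows roughly exponentially, then saturates
        print(step, np.linalg.norm(a - b))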
What is the role of CYCLOIDEA genes in the evolution of bilateral symmetry?,"The CYLD lysine 63 deubiquitinase gene, also termed the CYLD gene, CYLD is an evolutionary ancient gene found to be present as far back on the evolutionary scale as in sponges. Xenambulacraria is a proposed clade of animals with bilateral symmetry as an embryo, consisting of the Xenacoelomorpha (i.e., Xenoturbella and acoelomorphs) and the Ambulacraria (i.e., echinoderms and hemichordates). The Chthonioidea are a superfamily of pseudoscorpions, representing the earliest diverging and most primitive living pseudoscorpions. If confirmed, the clade would either be the sister group to the chordates (if deuterostomes are monophyletic) or the sister group to all the other bilaterians, grouped together in Centroneuralia (with deuterostomes being paraphyletic). The CYLD gene in known to code for a cytoplasmic protein, termed CYLD lysine 63 deubiquitinase (here termed CYLD protein), which has three cytoskeletal- associated protein-glycine-conserved (CAP-GLY) domains (areas or the protein controlling critical functions). The superfamily contains two families. Cyclin-A2 is a protein that in humans is encoded by the CCNA2 gene. Cyclin A2 transcription is mostly regulated by the transcription factor E2F and begins in G1, after the R point. It is one of the two types of cyclin A: cyclin A1 is expressed during meiosis and embryogenesis while cyclin A2 is expressed in dividing somatic cells. == Function == Cyclin A2 belongs to the cyclin family, whose members regulate cell cycle progression by interacting with CDK kinases. Cyclin-O is a protein that in humans is encoded by the CCNO gene. == Interactions == Cyclin O has been shown to interact with RPA2 and PCNA. == References == == Further reading == * * * * * * * * * * * * * * * == External links == * * Cyclin A2 is synthesized at the onset of S phase and localizes to the nucleus, where the cyclin A2-CDK2 complex is implicated in the initiation and progression of DNA synthesis. Cyclin A2 is unique in that it can activate two different CDK kinases; it binds CDK2 during S phase, and CDK1 during the transition from G2 to M phase. Although the validity of the clade relies mostly on phylogenomics, molecular genetics studies have proposed pigment cell clusters expressing polyketide synthase (PKS) and sulfotransferase as a synapomorphy of Xenambulacraria. == Phylogeny == Xenambulacraria has usually been recovered as a clade inside of either of two distinct phylogenies. === Basal Xenambulacraria === The following phylogeny assumes a paraphyletic Deuterostomia, with Xenambulacraria at the base of Bilateria. === Xenambulacraria inside Deuterostomia === The following phylogeny assumes a monophyletic Deuterostomia, with Xenambulacraria nested inside of it. == Gallery == File:Nemertodermatida species.png|Various Acoelomorpha (nemertodermatids). The CYLD gene is classified as a tumor suppressor gene, i.e. a gene that regulates cell growth and when inactivated by a mutation leads to uncontrolled cell growth and the formation of tumors. After the R point, pRb is phosphorylated and can no longer bind E2F, leading to cyclin A2 transcription. Cyclin A2 is involved in the G2/M transition but it cannot independently form a maturation promoting factor (MPF). During mouse development and aging, cyclin A2 promotes DNA repair, particularly double-strand break repair, in the brain. The cyclin A2-CDK2 complex eventually phosphorylates E2F, turning off cyclin A2 transcription. 
Also in mice, cyclin A2 was found to be an RNA binding protein that controls the translation of Mre11 mRNA. == Clinical significance == Cyclin A2 (Ccna2) is a key protein involved in the direction of mammalian cardiac myocytes to grow and divide, and has been shown to induce cardiac repair following myocardial infarction. CYLD protein removes ubiquitin from proteins involved in regulating the NF-κB, Wnt, notch, TGF-β, and JNK cell signaling pathways; these pathways normally act to regulate hair formation, cell growth, cell survival, inflammatory responses, and/or tumor development. ",CYCLOIDEA genes are responsible for the selection of symmetry in the evolution of animals.,"CYCLOIDEA genes are responsible for the evolution of specialized pollinators in plants, which in turn led to the transition of radially symmetrical flowers to bilaterally symmetrical flowers.","CYCLOIDEA genes are responsible for the expression of dorsal petals in Antirrhinum majus, which control their size and shape.","CYCLOIDEA genes are responsible for the expression of transcription factors that control the expression of other genes, allowing their expression to influence developmental pathways relating to symmetry.",CYCLOIDEA genes are responsible for mutations that cause a reversion to radial symmetry.,D,kaggle200,"How is the enormous diversity in the shape, color and sizes of flowers established? There is enormous variation in the developmental program in different plants. For example, monocots possess structures like lodicules and palea, that were believed to be analogous to the dicot petals and carpels respectively. It turns out that this is true, and the variation is due to slight changes in the MADS-box genes and their expression pattern in the monocots. Another example is that of the toad-flax, ""Linaria vulgaris"", which has two kinds of flower symmetries: radial and bilateral. These symmetries are due to changes in copy number, timing, and location of expression in ""CYCLOIDEA,"" which is related to TCP1 in Arabidopsis.
In molecular biology, the protein domain TCP is actually a family of transcription factors named after: teosinte branched 1 (tb1, ""Zea mays"" (Maize)), cycloidea (cyc) (""Antirrhinum majus"") (Garden snapdragon) and PCF in rice (""Oryza sativa"").
Another example is that of ""Linaria vulgaris"", which has two kinds of flower symmetries-radial and bilateral. These symmetries are due to epigenetic changes in just one gene called ""CYCLOIDEA"".
Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowers. The evolution of bilateral symmetry is due to the expression of ""CYCLOIDEA"" genes. Evidence for the role of the ""CYCLOIDEA"" gene family comes from mutations in these genes which cause a reversion to radial symmetry. The ""CYCLOIDEA"" genes encode transcription factors, proteins which control the expression of other genes. This allows their expression to influence developmental pathways relating to symmetry. For example, in ""Antirrhinum majus"", ""CYCLOIDEA"" is expressed during early development in the dorsal domain of the flower meristem and continues to be expressed later on in the dorsal petals to control their size and shape. It is believed that the evolution of specialized pollinators may play a part in the transition of radially symmetrical flowers to bilaterally symmetrical flowers.","Evolution of symmetry in animals Symmetry is often selected for in the evolution of animals. This is unsurprising since asymmetry is often an indication of unfitness – either defects during development or injuries throughout a lifetime. This is most apparent during mating during which females of some species select males with highly symmetrical features. For example, facial symmetry influences human judgements of human attractiveness. Additionally, female barn swallows, a species where adults have long tail streamers, prefer to mate with males that have the most symmetrical tails.While symmetry is known to be under selection, the evolutionary history of different types of symmetry in animals is an area of extensive debate. Traditionally it has been suggested that bilateral animals evolved from a radial ancestor. Cnidarians, a phylum containing animals with radial symmetry, are the most closely related group to the bilaterians. Cnidarians are one of two groups of early animals considered to have defined structure, the second being the ctenophores. Ctenophores show biradial symmetry leading to the suggestion that they represent an intermediate step in the evolution of bilateral symmetry from radial symmetry.Interpretations based only on morphology are not sufficient to explain the evolution of symmetry. Two different explanations are proposed for the different symmetries in cnidarians and bilateria. The first suggestion is that an ancestral animal had no symmetry (was asymmetric) before cnidarians and bilaterians separated into different evolutionary lineages. Radial symmetry could have then evolved in cnidarians and bilateral symmetry in bilaterians. Alternatively, the second suggestion is that an ancestor of cnidarians and bilaterians had bilateral symmetry before the cnidarians evolved and became different by having radial symmetry. Both potential explanations are being explored and evidence continues to fuel the debate.
Factors influencing floral diversity How is the enormous diversity in the shape, color and sizes of flowers established? There is enormous variation in the developmental program in different plants. For example, monocots possess structures like lodicules and palea, that were believed to be analogous to the dicot petals and carpels respectively. It turns out that this is true, and the variation is due to slight changes in the MADS-box genes and their expression pattern in the monocots. Another example is that of the toad-flax, Linaria vulgaris, which has two kinds of flower symmetries: radial and bilateral. These symmetries are due to changes in copy number, timing, and location of expression in CYCLOIDEA, which is related to TCP1 in Arabidopsis.
Evolution of symmetry in plants Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowers. The evolution of bilateral symmetry is due to the expression of CYCLOIDEA genes. Evidence for the role of the CYCLOIDEA gene family comes from mutations in these genes which cause a reversion to radial symmetry. The CYCLOIDEA genes encode transcription factors, proteins which control the expression of other genes. This allows their expression to influence developmental pathways relating to symmetry. For example, in Antirrhinum majus, CYCLOIDEA is expressed during early development in the dorsal domain of the flower meristem and continues to be expressed later on in the dorsal petals to control their size and shape. It is believed that the evolution of specialized pollinators may play a part in the transition of radially symmetrical flowers to bilaterally symmetrical flowers.","The evolution of bilateral symmetry is due to the expression of CYCLOIDEA genesThe evolution of bilateral symmetry is due to the expression of ""CYCLOIDEA"" genesEvidence for the role of the CYCLOIDEA gene family comes from mutations in these genes which cause a reversion to radial symmetryEvidence for the role of the ""CYCLOIDEA"" gene family comes from mutations in these genes which cause a reversion to radial symmetryThese symmetries are due to changes in copy number, timing, and location of expression in CYCLOIDEA, which is related to TCP1 in Arabidopsis.
Evolution of symmetry in plants Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowersRadial symmetry could have then evolved in cnidarians and bilateral symmetry in bilateriansThis allows their expression to influence developmental pathways relating to symmetryThese symmetries are due to epigenetic changes in just one gene called ""CYCLOIDEA"".
Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowersAlternatively, the second suggestion is that an ancestor of cnidarians and bilaterians had bilateral symmetry before the cnidarians evolved and became different by having radial symmetryCtenophores show biradial symmetry leading to the suggestion that they represent an intermediate step in the evolution of bilateral symmetry from radial symmetry.Interpretations based only on morphology are not sufficient to explain the evolution of symmetryThe ""CYCLOIDEA"" genes encode transcription factors, proteins which control the expression of other genesThe CYCLOIDEA genes encode transcription factors, proteins which control the expression of other genesEvolution of symmetry in animals Symmetry is often selected for in the evolution of animalsThese symmetries are due to changes in copy number, timing, and location of expression in ""CYCLOIDEA,"" which is related to TCP1 in Arabidopsis.
In molecular biology, the protein domain TCP is actually a family of transcription factors named after: teosint","The evolution of bilateral symmetry is due to the expression of CYCLOIDEA genesThe evolution of bilateral symmetry is due to the expression of ""CYCLOIDEA"" genesEvidence for the role of the CYCLOIDEA gene family comes from mutations in these genes which cause a reversion to radial symmetryEvidence for the role of the ""CYCLOIDEA"" gene family comes from mutations in these genes which cause a reversion to radial symmetryThese symmetries are due to changes in copy number, timing, and location of expression in CYCLOIDEA, which is related to TCP1 in Arabidopsis.
Evolution of symmetry in plants Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowersRadial symmetry could have then evolved in cnidarians and bilateral symmetry in bilateriansThis allows their expression to influence developmental pathways relating to symmetryThese symmetries are due to epigenetic changes in just one gene called ""CYCLOIDEA"".
Early flowering plants had radially symmetric flowers but since then many plants have evolved bilaterally symmetrical flowersAlternatively, the second suggestion is that an ancestor of cnidarians and bilaterians had bilateral symmetry before the cnidarians evolved and became different by having radial symmetryCtenophores show biradial symmetry leading to the suggestion that they represent an intermediate step in the evolution of bilateral symmetry from radial symmetry.Interpretations based only on morphology are not sufficient to explain the evolution of symmetryThe ""CYCLOIDEA"" genes encode transcription factors, proteins which control the expression of other genesThe CYCLOIDEA genes encode transcription factors, proteins which control the expression of other genesEvolution of symmetry in animals Symmetry is often selected for in the evolution of animalsThese symmetries are due to changes in copy number, timing, and location of expression in ""CYCLOIDEA,"" which is related to TCP1 in Arabidopsis.
In molecular biology, the protein domain TCP is actually a family of transcription factors named after: teosint[SEP]What is the role of CYCLOIDEA genes in the evolution of bilateral symmetry?","['D', 'E', 'C']",1.0
What is the required excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universe?,"This strongly suggested that there must also be a sixth quark, the top, to complete the pair. They may consist of five quarks tightly bound together, but it is also possible that they are more loosely bound and consist of a three-quark baryon and a two- quark meson interacting relatively weakly with each other via pion exchange (the same force that binds atomic nuclei) in a ""meson-baryon molecule"". ==History== ===Mid-2000s=== The requirement to include an antiquark means that many classes of pentaquark are hard to identify experimentally – if the flavour of the antiquark matches the flavour of any other quark in the quintuplet, it will cancel out and the particle will resemble its three-quark hadron cousin. During the quark epoch, the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. The top quark is the only quark that has been directly observed due to its decay time being shorter than the hadronization time. ==History== In 1973, Makoto Kobayashi and Toshihide Maskawa predicted the existence of a third generation of quarks to explain observed CP violations in kaon decay. It was known that this quark would be heavier than the bottom, requiring more energy to create in particle collisions, but the general expectation was that the sixth quark would soon be found. These 'regular' hadrons are well documented and characterized; however, there is nothing in theory to prevent quarks from forming 'exotic' hadrons such as tetraquarks with two quarks and two antiquarks, or pentaquarks with four quarks and one antiquark. ==Structure== thumb|right|A diagram of the type pentaquark possibly discovered in July 2015, showing the flavours of each quark and one possible colour configuration.|alt=five circles arranged clockwise: blue circle marked ""c"", yellow (antiblue) circle marked ""c"" with an overscore, green circle marked ""u"", blue circle marked ""d"", and red circle marked ""u"". Along with the charm quark, it is part of the second generation of matter. The proposed state was composed of two up quarks, two down quarks, and one strange antiquark (uudd). As quarks have a baryon number of , and antiquarks of , the pentaquark would have a total baryon number of 1, and thus would be a baryon. In the following years, more evidence was collected and on 22 April 1994, the CDF group submitted their article presenting tentative evidence for the existence of a top quark with a mass of about . In physical cosmology, the quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons. Hadrons made of one quark and one antiquark are known as mesons, while those made of three quarks are known as baryons. The top quark, sometimes also referred to as the truth quark, (symbol: t) is the most massive of all observed elementary particles. Further, because it has five quarks instead of the usual three found in regular baryons ( 'triquarks'), it is classified as an exotic baryon. Restoration of the symmetry implied the existence of a fifth and sixth quark. (The other second generation quark, the strange quark, was already detected in 1968.) 
The corresponding quark mass is then predicted. To identify which quarks compose a given pentaquark, physicists use the notation qqqq, where q and respectively refer to any of the six flavours of quarks and antiquarks. A first measurement of the top quark charge has been published, resulting in some confidence that the top quark charge is indeed . ==Production== Because top quarks are very massive, large amounts of energy are needed to create one. The quark epoch ended when the universe was about 10−6 seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. ",One,Five,Three,Two,Four,A,kaggle200,"Padre employs plug-ins in order to provide all of its functionality on top of the runtime system. All the functionality except the core Perl 5 support is implemented as plug-ins. Padre has plug-ins for HTML and XML editing.
The more matter there is in the universe, the stronger the mutual gravitational pull of the matter. If the universe were ""too"" dense then it would re-collapse into a gravitational singularity. However, if the universe contained too ""little"" matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to form. Since the Big Bang, the universe has expanded monotonically. Perhaps unsurprisingly, our universe has just the right mass-energy density, equivalent to about 5 protons per cubic metre, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.
Through this synergy pandeism claims to answer primary objections to deism (why would God create and then not interact with the universe?) and to pantheism (how did the universe originate and what is its purpose?).
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetry. An excess of roughly one quark per billion quark-antiquark pairs was required in the early universe in order to provide all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise.","The early universe This period lasted around 370,000 years. Initially, various kinds of subatomic particles are formed in stages. These particles include almost equal amounts of matter and antimatter, so most of it quickly annihilates, leaving a small excess of matter in the universe.
Cosmogony deals specifically with the origin of the universe. Modern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind.
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetry. There is a required one excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universe. This insufficiency has not yet been explained, theoretically or otherwise.","There is a required one excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universeThe early universe This period lasted around 370,000 yearsPerhaps unsurprisingly, our universe has just the right mass-energy density, equivalent to about 5 protons per cubic metre, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.
Through this synergy pandeism claims to answer primary objections to deism (why would God create and then not interact with the universe?) and to pantheism (how did the universe originate and what is its purpose?).
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetryThese particles include almost equal amounts of matter and antimatter, so most of it quickly annihilates, leaving a small excess of matter in the universe.
Cosmogony deals specifically with the origin of the universeHowever, if the universe contained too ""little"" matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to formModern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind.
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetryInitially, various kinds of subatomic ","There is a required one excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universeThe early universe This period lasted around 370,000 yearsPerhaps unsurprisingly, our universe has just the right mass-energy density, equivalent to about 5 protons per cubic metre, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.
Through this synergy pandeism claims to answer primary objections to deism (why would God create and then not interact with the universe?) and to pantheism (how did the universe originate and what is its purpose?).
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetryThese particles include almost equal amounts of matter and antimatter, so most of it quickly annihilates, leaving a small excess of matter in the universe.
Cosmogony deals specifically with the origin of the universeHowever, if the universe contained too ""little"" matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to formModern metaphysical cosmology and cosmogony try to address questions such as: What is the origin of the Universe? What is its first cause? Is its existence necessary? (see monism, pantheism, emanationism and creationism) What are the ultimate material components of the Universe? (see mechanism, dynamism, hylomorphism, atomism) What is the ultimate reason for the existence of the Universe? Does the cosmos have a purpose? (see teleology) Mind and matter Accounting for the existence of mind in a world largely composed of matter is a metaphysical problem which is so large and important as to have become a specialized subject of study in its own right, philosophy of mind.
The Standard Model can incorporate baryogenesis, though the amount of net baryons (and leptons) thus created may not be sufficient to account for the present baryon asymmetryInitially, various kinds of subatomic [SEP]What is the required excess quark per billion quark-antiquark pairs in the early universe in order to provide all the observed matter in the universe?","['D', 'B', 'C']",0.0
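The contexts in this row quote two quantitative claims: a mass-energy density equivalent to about 5 protons per cubic metre, and a required excess of one quark per billion quark-antiquark pairs. The short Python sketch below checks the first figure against the standard critical-density formula ρ_c = 3H²/(8πG); the Hubble-constant value and the physical constants are illustrative assumptions supplied here, not values taken from this row.

```python
# Back-of-the-envelope check of the "about 5 protons per cubic metre" figure.
# H0 below is an assumed, commonly quoted value, not data from this row.
import math

H0_KM_S_MPC = 67.7            # assumed Hubble constant, km/s/Mpc
MPC_IN_M = 3.0857e22          # metres per megaparsec
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
PROTON_MASS = 1.673e-27       # kg

H0 = H0_KM_S_MPC * 1e3 / MPC_IN_M                 # Hubble constant in 1/s
critical_density = 3 * H0**2 / (8 * math.pi * G)  # critical density, kg/m^3

print(f"critical density ~ {critical_density:.2e} kg/m^3")
print(f"~ {critical_density / PROTON_MASS:.1f} proton masses per cubic metre")
```

With these assumed inputs the result is roughly 5 proton masses per cubic metre, consistent with the figure quoted in the context.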
"What is the meaning of the term ""horror vacui""?","Horror vacui can refer to: *Horror vacui (art), a concept in art approximately translated from Latin fear of empty spaces *Horror vacui (physics), a physical postulate *Horror Vacui (film), a 1984 German satirical film *Horror Vacui (album), by Linea 77 *Horror Vacui, a composition by Jonny Greenwood By contrast, horror is the feeling of revulsion that usually follows a frightening sight, sound, or otherwise experience. Erotic horror, alternately called horror erotica or dark erotica, is a term applied to works of fiction in which sensual or sexual imagery are blended with horrific overtones or story elements for the sake of sexual titillation. According to Devendra Varma in The Gothic Flame (1966): > The difference between Terror and Horror is the difference between awful > apprehension and sickening realization: between the smell of death and > stumbling against a corpse. ==Horror fiction== Horror is also a genre of film and fiction that relies on horrifying images or situations to tell stories and prompt reactions or jump scares to put their audiences on edge. I'm not proud. ==Psychoanalytic views== Freud likened the experience of horror to that of the uncanny.S Freud, The “Uncanny” Imago V 1919 p. 27 In his wake, Georges Bataille saw horror as akin to ecstasy in its transcendence of the everyday;E Roudinesco, Jacques Lacan' (Cambridge 1999) p. 122 and p. 131 as opening a way to go beyond rational social consciousness.W Paulett, G S Bataille (2015) p. 67 and p. 101 Julia Kristeva in turn considered horror as evoking experience of the primitive, the infantile, and the demoniacal aspects of unmediated femininity.J Kristeva, Powers of Horror (New York 1981) p. 63-5 ==Horror, helplessness and trauma== The paradox of pleasure experienced through horror films/books can be explained partly as stemming from relief from real-life horror in the experience of horror in play, partly as a safe way to return in adult life to the paralysing feelings of infantile helplessness.R Solomon, In Defence of Sentimentality (200) p. 108-113 Helplessness is also a factor in the overwhelming experience of real horror in psychological trauma.D Goleman, Emotional Intelligence (London 1996) p. 203-4 Playing at re-experiencing the trauma may be a helpful way of overcoming it.O Fenichel, The Psychoanalytic Theory of Neurosis (London 1946) p. 542-3 == See also == ==References== == Bibliography == *Steven Bruhm (1994) Gothic Bodies: The Politics of Pain in Romantic Fiction. ""Horror,"" King writes, is that moment at which one sees the creature/aberration that causes the terror or suspense, a ""shock value"". H.P. Lovecraft explanation for the fascination of horror stems more from the lack of understanding of a humans true place and our deep inner instinct we are out of touch with, and the basic insignificance of ones life and the universe at large. Psychological horror is a subgenre of horror and psychological fiction with a particular focus on mental, emotional, and psychological states to frighten, disturb, or unsettle its audience. Modern research reveals the relationship between empathy and fear or the lack thereof with interest in horror. Citing many examples, he defines ""terror"" as the suspenseful moment in horror before the actual monster is revealed. The definition of creepypasta has expanded over time to include most horror stories written on the Internet. Psychological horror further forces the manifestation of each individuals own personal horror. 
The subgenre frequently overlaps with the related subgenre of psychological thriller, and often uses mystery elements and characters with unstable, unreliable, or disturbed psychological states to enhance the suspense, drama, action, and paranoia of the setting and plot and to provide an overall creepy, unpleasant, unsettling, or distressing atmosphere. == Characteristics == Psychological horror usually aims to create discomfort or dread by exposing common or universal psychological and emotional vulnerabilities/fears and revealing the darker parts of the human psyche that most people may repress or deny. Terror is usually described as the feeling of dread and anticipation that precedes the horrifying experience. The use of shadows through light to cover up information results in a subtle escalation of suspense and horror of what can not be seen. The distinction between terror and horror is a standard literary and psychological concept applied especially to Gothic and horror fiction.Radcliffe 1826; Varma 1966; Crawford 1986: 101-3; Bruhm 1994: 37; Wright 2007: 35-56. Psychological horror films sometimes frighten or unsettle by relying on the viewer's or character's own imagination or the anticipation of a threat rather than an actual threat or a material source of fear portrayed onscreen. Horror allows the watcher to escape mundane conventional life and express the inner workings of their irrational thoughts. As a result of the lack of cross cultural research on the psychological effects of horror, one hypothesis is that individual cultures develop their own unique sense of horror, based in their cultural experiences. Terror has also been defined by Noël Carroll as a combination of horror and revulsion.M Hills, The Pleasures of Horror' (2005) p. 17 ==Literary Gothic== The distinction between terror and horror was first characterized by the Gothic writer Ann Radcliffe (1764-1823), horror being more related to being shocked or scared (being horrified) at an awful realization or a deeply unpleasant occurrence, while terror is more related to being anxious or fearful.Varma 1966. ",The quantified extension of volume in empty space.,The commonly held view that nature abhorred a vacuum.,The medieval thought experiment into the idea of a vacuum.,The success of Descartes' namesake coordinate system.,The spatial-corporeal component of Descartes' metaphysics.,B,kaggle200,"The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
Another example comes from ancient Greece during the Geometric Age (1100–900 BCE), when horror vacui was considered a stylistic element of all art. The mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacui. Other African artists such as Malangatana of Mozambique (Malangatana Ngwenya) also fill the canvas in this way.
In visual art, horror vacui (, ; ), also referred to as kenophobia (from ), is the filling of the entire surface of a space or an artwork with detail. In physics, ""horror vacui"" reflects Aristotle's idea that ""nature abhors an empty space.""","Italian art critic and scholar Mario Praz used this term to describe the excessive use of ornament in design during the Victorian age. Other examples of horror vacui can be seen in the densely decorated carpet pages of Insular illuminated manuscripts, where intricate patterns and interwoven symbols may have served ""apotropaic as well as decorative functions."" The interest in meticulously filling empty spaces is also reflected in Arabesque decoration in Islamic art from ancient times to present. The art historian Ernst Gombrich theorized that such highly ornamented patterns can function like a picture frame for sacred images and spaces. ""The richer the elements of the frame,"" Gombrich wrote, ""the more the centre will gain in dignity.""Another example comes from ancient Greece during the Geometric Age (1100–900 BCE), when horror vacui was considered a stylistic element of all art. The mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacui. Other African artists such as Malangatana of Mozambique (Malangatana Ngwenya) also fill the canvas in this way.
The arrangement of Ancient Egyptian hieroglyphs suggests an abhorrence of empty space. Signs were repeated or phonetic complements added to prevent gaps.
In visual art, horror vacui (Latin for 'fear of empty space'), or kenophobia (Greek for 'fear of the empty'), is a phenomenon in which the entire surface of a space or an artwork is filled with detail and content, leaving as little perceived emptiness as possible. It relates to the antiquated physical idea, horror vacui, proposed by Aristotle who held that ""nature abhors an empty space"".","Signs were repeated or phonetic complements added to prevent gaps.
In visual art, horror vacui (Latin for 'fear of empty space'; UK: ; US: ), or kenophobia (Greek for 'fear of the empty'), is a phenomenon in which the entire surface of a space or an artwork is filled with detail and content, leaving as little perceived emptiness as possible- The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
Another example comes from ancient Greece during the Geometric Age (1100–900 BCE), when horror vacui was considered a stylistic element of all artIt relates to the antiquated physical idea, horror vacui, proposed by Aristotle who held that ""nature abhors an empty space""Other African artists such as Malangatana of Mozambique (Malangatana Ngwenya) also fill the canvas in this way.
In visual art, horror vacui (, ; ), also referred to as kenophobia (from ), is the filling of the entire surface of a space or an artwork with detailThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiIn physics, ""horror vacui"" reflects Aristotle's idea that ""nature abhors an empty space.""Other examples of horror vacui can be seen in the densely decorated carpet pages of Insular illuminated manuscripts, where intricate patterns and interwoven symbols may have served ""apotropaic as well as decorative functions."" The interest in meticulously filling empty spaces is also reflected in Arabesque decoration in Islamic ar","Signs were repeated or phonetic complements added to prevent gaps.
In visual art, horror vacui (Latin for 'fear of empty space'; UK: ; US: ), or kenophobia (Greek for 'fear of the empty'), is a phenomenon in which the entire surface of a space or an artwork is filled with detail and content, leaving as little perceived emptiness as possible- The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
Another example comes from ancient Greece during the Geometric Age (1100–900 BCE), when horror vacui was considered a stylistic element of all artIt relates to the antiquated physical idea, horror vacui, proposed by Aristotle who held that ""nature abhors an empty space""Other African artists such as Malangatana of Mozambique (Malangatana Ngwenya) also fill the canvas in this way.
In visual art, horror vacui (, ; ), also referred to as kenophobia (from ), is the filling of the entire surface of a space or an artwork with detailThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The artwork in the Where's Wally? series of children's books is a commonly known example of horror vacui, as are many of the small books written or illustrated by the macabre imagination of Edward Gorey.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiThe mature work of the French Renaissance engraver Jean Duvet consistently exhibits horror vacui.
The Tingatinga painting style of Dar es Salaam in Tanzania is a contemporary example of horror vacuiIn physics, ""horror vacui"" reflects Aristotle's idea that ""nature abhors an empty space.""Other examples of horror vacui can be seen in the densely decorated carpet pages of Insular illuminated manuscripts, where intricate patterns and interwoven symbols may have served ""apotropaic as well as decorative functions."" The interest in meticulously filling empty spaces is also reflected in Arabesque decoration in Islamic ar[SEP]What is the meaning of the term ""horror vacui""?","['B', 'D', 'E']",1.0
What is the Droste effect?,"The Droste effect (), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. The illustration reappears on the cocoa package held by the nurse, inducing a recursive visual effect known today as the Droste effect.Törnqvist, Egil. They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork. === Advertising === In the 20th century, the Droste effect was used to market a variety of products. The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s. == Effect == === Origins === The Droste effect is named after the image on the tins and boxes of Droste cocoa powder which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image, designed by Jan Misset.""Bedenker van Droste-effect bekend"", Trouw, 1 August 1994. File:Droste 1260359-nevit.jpg|Droste effect by image manipulation (using GIMP). === Medieval art === The Droste effect was anticipated by Giotto early in the 14th century, in his Stefaneschi Triptych. File:Polittico Stefaneschi, dettaglio.jpg| ... who is holding the triptych itself. === M. C. Escher === The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph Print Gallery, which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The effect is seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. Apart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations. The image would proclaim the wholesome effect of chocolate milk and became inseparable from the Droste brand. Little Giant Comics #1 (July 1938) is said to be the first-published example of an infinity cover. == See also == * Beyond the Infinite Two Minutes, a movie prominently incorporating the effect * Chinese boxes * Dream within a dream * Fractal * Homunculus argument * Infinity mirror * Infinite regress * Matryoshka doll * Infinity * Quine * Scale invariance * Self-similarity * Story within a story § Fractal fiction * Video feedback == Notes == == References == == External links == * Escher and the Droste effect * The Math Behind the Droste Effect (article by Jos Leys summarizing the results of the Leiden study and article) * Droste Effect with Mathematica * Droste Effect from Wolfram Demonstrations Project Category:Artistic techniques Category:Recursion Category:Symmetry By making dynamic and progressive commercials for Droste, CSM provided a rejuvenation of Droste's image. The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself. The effect is named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. File:JudgeMagazine19Jan1918.png|Judge cover, 19 January 1918 File:LibertyMagazine10May1924.png|Liberty cover, 10 May 1924 File:Royal Baking Powder.jpg|Royal Baking Powder, early 20th century === Comic books === The Droste effect has been a motif for the cover of comic books for many years, known as an ""infinity cover"". Droste B.V. 
() is a Dutch chocolate manufacturer. It is believed that this illustration was created by Jan (Johannes) Musset, being inspired by a pastel known as La Belle Chocolatière (""The Pretty Chocolate Girl""). After the turn of the century the company had been exporting its products to Belgium, Germany and France, and in 1905 it entered the American market. ===The nurse=== The famous illustration of the woman in nurse clothes, holding a plate with a cup of milk and a Droste cocoa package, first appeared on Droste products around the year 1900. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. In the meantime, Droste's assortment had grown to numerous cocoa and chocolate products, the famous Dutch chocolate letters included. Drost is a Dutch occupational surname. ",The Droste effect is a type of optical illusion that creates the appearance of a three-dimensional image within a two-dimensional picture.,"The Droste effect is a type of packaging design used by a variety of products, named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904.","The Droste effect is a type of painting technique used by Dutch artist M. C. Escher in his 1956 lithograph Print Gallery, which portrays a gallery that depicts itself.","The Droste effect is a recursive image effect in which a picture appears within itself in a place where a similar picture would realistically be expected to appear. This creates a loop that can continue as far as the image's resolution allows, and is named after a Dutch brand of cocoa.",The Droste effect is a type of recursive algorithm used in computer programming to create self-referential images.,D,kaggle200,"The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph ""Print Gallery"", which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The work has attracted the attention of mathematicians including Hendrik Lenstra. They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork.
The Droste effect, known in art as an example of ""mise en abyme"", is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The Droste effect is a theme in Russell Hoban's children's novel, ""The Mouse and His Child"", appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Musset in 1904. It has since been used in the packaging of a variety of products. The effect is seen in the Dutch artist M. C. Escher's 1956 lithograph ""Print Gallery"", which portrays a gallery that depicts itself. Apart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations. The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s.","Advertising In the 20th century, the Droste effect was used to market a variety of products. The packaging of Land O'Lakes butter featured a Native American woman holding a package of butter with a picture of herself. Morton Salt similarly made use of the effect. The cover of the 1969 vinyl album Ummagumma by Pink Floyd shows the band members sitting in various places, with a picture on the wall showing the same scene, but the order of the band members rotated. The logo of The Laughing Cow cheese spread brand pictures a cow with earrings. On closer inspection, these are seen to be images of the circular cheese spread package, each bearing the image of the laughing cow. The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The Droste effect (Dutch pronunciation: [ˈdrɔstə]), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. It has since been used in the packaging of a variety of products. The effect is seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. Apart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations. The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s.","They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork.
The Droste effect (), known in art as an example of ""mise en abyme"", is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appearApart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The Droste effect (Dutch pronunciation: [ˈdrɔstə]), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appearAdvertising In the 20th century, the Droste effect was used to market a variety of productsThis produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The Droste effect is a theme in Russell Hoban's children's novel, ""The Mouse and His Child"", appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Musset in 1904The effect is seen in the Dutch artist MEscher made use of the Droste effect in his 1956 lithograph ""Print Gallery"", which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the imageMorton Salt similarly made use of the effectThis produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s.","They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork.
The Droste effect (), known in art as an example of ""mise en abyme"", is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appearApart from advertising, the Droste effect is displayed in the model village at Bourton-on-the-Water: this contains a model of itself, with two further iterations The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The Droste effect (Dutch pronunciation: [ˈdrɔstə]), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appearAdvertising In the 20th century, the Droste effect was used to market a variety of productsThis produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The Droste effect is a theme in Russell Hoban's children's novel, ""The Mouse and His Child"", appearing in the form of a label on a can of ""Bonzo Dog Food"" which depicts itself.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Musset in 1904The effect is seen in the Dutch artist MEscher made use of the Droste effect in his 1956 lithograph ""Print Gallery"", which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the imageMorton Salt similarly made use of the effectThis produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows.
The effect is named after a Dutch brand of cocoa, with an image designed by Jan Misset in 1904The effect has been a motif, too, for the cover of many comic books, where it was especially popular in the 1940s.[SEP]What is the Droste effect?","['D', 'E', 'A']",1.0
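The contexts above describe the Droste effect as a picture that recursively contains a scaled copy of itself, a loop that in practice stops at the image's resolution limit. Purely as an illustration of that construction (not the dataset's content, and not Escher's or the Droste designers' actual method), here is a minimal Python/Pillow sketch; the file name, scale factor and placement are arbitrary assumptions.

```python
# Minimal Droste-style recursion: paste ever-smaller copies of an image into
# itself until a copy would be smaller than one pixel (the resolution limit).
from PIL import Image

def droste(path, scale=0.6, offset=(0.3, 0.3)):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    x, y = int(offset[0] * w), int(offset[1] * h)   # where the copy reappears
    cw, ch = int(w * scale), int(h * scale)
    while cw > 1 and ch > 1:
        img.paste(img.resize((cw, ch)), (x, y))     # nest the current image
        x, y = x + int(offset[0] * cw), y + int(offset[1] * ch)
        cw, ch = int(cw * scale), int(ch * scale)
    return img

# Hypothetical usage: droste("photo.jpg").save("droste.jpg")
```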
What is water hammer?,"In French and Italian, the terms for ""water hammer"" come from the hydraulic ram: coup de bélier (French) and colpo d'ariete (Italian) both mean ""blow of the ram"". see page 22. Other potential causes of water hammer: * A pump stopping * A check valve which closes quickly (i.e., ""check valve slam"") due to the flow in a pipe reversing direction on loss of motive power, such as a pump stopping. The following characteristics may reduce or eliminate water hammer: * Reduce the pressure of the water supply to the building by fitting a regulator. In residential plumbing systems, water hammer may occur when a dishwasher, washing machine or toilet suddenly shuts off water flow. Hydraulic hammer may refer to: *Breaker (hydraulic), a percussion hammer fitted to an excavator for demolishing concrete structures or rocks *Hydraulic hammer, a type of piling hammer As a result, we see that we can reduce the water hammer by: * increasing the pipe diameter at constant flow, which reduces the flow velocity and hence the deceleration of the liquid column; * employing the solid material as tight as possible with respect to the internal fluid bulk (solid Young modulus low with respect to fluid bulk modulus); * introducing a device that increases the flexibility of the entire hydraulic system, such as a hydraulic accumulator; * where possible, increasing the fraction of undissolved gases in the liquid. ==Dynamic equations== The water hammer effect can be simulated by solving the following partial differential equations. : \frac{\partial V}{\partial x} + \frac{1}{B} \frac{dP}{dt} = 0, : \frac{dV}{dt} + \frac{1}{\rho} \frac{\partial P}{\partial x} + \frac{f}{2D} V |V| = 0, where V is the fluid velocity inside pipe, \rho is the fluid density, B is the equivalent bulk modulus, and f is the Darcy–Weisbach friction factor. ==Column separation== Column separation is a phenomenon that can occur during a water- hammer event. Water hammer is related to the speed of sound in the fluid, and elbows reduce the influences of pressure waves. * The water hammer from a liquid jet created by a collapsing microcavity is studied for potential applications noninvasive transdermal drug delivery. ==See also== * Blood hammer * Cavitation * Fluid dynamics * Hydraulophone – musical instruments employing water and other fluids * Impact force * Transient (civil engineering) * Watson's water hammer pulse ==References== ==External links== * What Is Water Hammer and Why Is It Important That You Prevent it? Water hammer can cause pipelines to break if the pressure is sufficiently high. A hammer is a tool, most often a hand tool, consisting of a weighted ""head"" fixed to a long handle that is swung to deliver an impact to a small area of an object. * Fluid structure interaction: the pipeline reacts on the varying pressures and causes pressure waves itself. ==Applications== * The water hammer principle can be used to create a simple water pump called a hydraulic ram. Hydroelectric power plants especially must be carefully designed and maintained because the water hammer can cause water pipes to fail catastrophically. Hence, we can say that the magnitude of the water hammer largely depends upon the time of closure, elastic components of pipe & fluid properties. ==Expression for the excess pressure due to water hammer== When a valve with a volumetric flow rate Q is closed, an excess pressure ΔP is created upstream of the valve, whose value is given by the Joukowsky equation: : \Delta P = Z Q. 
* Use accumulator to prevent water hammer in pipeline * What Is Water Hammer/Steam Hammer? Water hammer was exploited before there was even a word for it. Water hammer can be analyzed by two different approaches—rigid column theory, which ignores compressibility of the fluid and elasticity of the walls of the pipe, or by a full analysis that includes elasticity. As the 19th century witnessed the installation of municipal water supplies, water hammer became a concern to civil engineers. thumbnail|300px|Effect of a pressure surge on a float gauge Hydraulic shock (colloquial: water hammer; fluid hammer) is a pressure surge or wave caused when a fluid in motion, usually a liquid but sometimes also a gas is forced to stop or change direction suddenly; a momentum change. Although most hammers are hand tools, powered hammers, such as steam hammers and trip hammers, are used to deliver forces beyond the capacity of the human arm. * A hydropneumatic device similar in principle to a shock absorber called a 'Water Hammer Arrestor' can be installed between the water pipe and the machine, to absorb the shock and stop the banging. ",Water hammer is a type of water turbine used in hydroelectric generating stations to generate electricity.,Water hammer is a type of air trap or standpipe used to dampen the sound of moving water in plumbing systems.,Water hammer is a type of plumbing tool used to break pipelines and absorb the potentially damaging forces caused by moving water.,Water hammer is a type of water pump used to increase the pressure of water in pipelines.,"Water hammer is a loud banging noise resembling a hammering sound that occurs when moving water is suddenly stopped, causing a rise in pressure and resulting shock wave.",E,kaggle200,"In 1772, Englishman John Whitehurst built a hydraulic ram for a home in Cheshire, England. In 1796, French inventor Joseph Michel Montgolfier (1740–1810) built a hydraulic ram for his paper mill in Voiron. In French and Italian, the terms for ""water hammer"" come from the hydraulic ram: ""coup de bélier"" (French) and ""colpo d'ariete"" (Italian) both mean ""blow of the ram"". As the 19th century witnessed the installation of municipal water supplies, water hammer became a concern to civil engineers. Water hammer also interested physiologists who were studying the circulatory system.
One of the first to successfully investigate the water hammer problem was the Italian engineer Lorenzo Allievi.
Water flowing through a pipe has momentum. If the moving water is suddenly stopped, such as by closing a valve downstream of the flowing water, the pressure can rise suddenly, with a resulting shock wave. In domestic plumbing this shock wave is experienced as a loud banging resembling a hammering noise. Water hammer can cause pipelines to break if the pressure is sufficiently high. Air traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supply.","Water flowing through a pipe has momentum. If the moving water is suddenly stopped - such as by closing a valve downstream of the flowing water, the pressure can rise suddenly with a resulting shock wave. In domestic plumbing this shock wave is experienced as a loud banging resembling a hammering noise. Water hammer can cause pipelines to break if the pressure is sufficiently high. Air traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supply. Water hammer was exploited before there was even a word for it.
The water hammer principle can be used to create a simple water pump called a hydraulic ram.
Leaks can sometimes be detected using water hammer.
Enclosed air pockets can be detected in pipelines.
The water hammer from a liquid jet created by a collapsing microcavity is studied for potential applications noninvasive transdermal drug delivery.","In French and Italian, the terms for ""water hammer"" come from the hydraulic ram: ""coup de bélier"" (French) and ""colpo d'ariete"" (Italian) both mean ""blow of the ram""Water hammer can cause pipelines to break if the pressure is sufficiently high Water hammer was exploited before there was even a word for it.
The water hammer principle can be used to create a simple water pump called a hydraulic ram.
Leaks can sometimes be detected using water hammer.
Enclosed air pockets can be detected in pipelines.
The water hammer from a liquid jet created by a collapsing microcavity is studied for potential applications noninvasive transdermal drug deliveryWater hammer also interested physiologists who were studying the circulatory system.
One of the first to successfully investigate the water hammer problem was the Italian engineer Lorenzo Allievi.
Water flowing through a pipe has momentumAs the 19th century witnessed the installation of municipal water supplies, water hammer became a concern to civil engineersAir traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supply.Air traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supplyIn domestic plumbing this shock wave is experienced as a loud banging resembling a hammering noiseIf the moving water is suddenly stopped - such as by closing a valve downstream of the flowing water, the pressure can rise suddenly with a resulting shock waveWater flowing through a pipe has momentum- In 1772, Englishman John Whitehurst built a hydraulic ram for a home in Cheshire, EnglandIn 1796, French inventor Joseph Michel Montgolfier (1740–1810) built a hydraul","In French and Italian, the terms for ""water hammer"" come from the hydraulic ram: ""coup de bélier"" (French) and ""colpo d'ariete"" (Italian) both mean ""blow of the ram""Water hammer can cause pipelines to break if the pressure is sufficiently high Water hammer was exploited before there was even a word for it.
The water hammer principle can be used to create a simple water pump called a hydraulic ram.
Leaks can sometimes be detected using water hammer.
Enclosed air pockets can be detected in pipelines.
The water hammer from a liquid jet created by a collapsing microcavity is studied for potential applications noninvasive transdermal drug deliveryWater hammer also interested physiologists who were studying the circulatory system.
One of the first to successfully investigate the water hammer problem was the Italian engineer Lorenzo Allievi.
Water flowing through a pipe has momentumAs the 19th century witnessed the installation of municipal water supplies, water hammer became a concern to civil engineersAir traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supply.Air traps or stand pipes (open at the top) are sometimes added as dampers to water systems to absorb the potentially damaging forces caused by the moving water.
In the 1st century B.C., Marcus Vitruvius Pollio described the effect of water hammer in lead pipes and stone tubes of the Roman public water supplyIn domestic plumbing this shock wave is experienced as a loud banging resembling a hammering noiseIf the moving water is suddenly stopped - such as by closing a valve downstream of the flowing water, the pressure can rise suddenly with a resulting shock waveWater flowing through a pipe has momentum- In 1772, Englishman John Whitehurst built a hydraulic ram for a home in Cheshire, EnglandIn 1796, French inventor Joseph Michel Montgolfier (1740–1810) built a hydraul[SEP]What is water hammer?","['E', 'D', 'C']",1.0
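The main context for this question quotes the Joukowsky relation ΔP = Z Q for the excess pressure created when a valve closes. As a rough worked example (the numbers and the identification Z = ρc/A, the usual hydraulic impedance, are assumptions added here, not data from this row), the surge for a modest domestic-scale flow is already on the order of tens of bar:

```python
# Illustrative Joukowsky surge: delta_p = Z * Q, with Z = rho * c / A,
# which reduces to delta_p = rho * c * v. All numbers are assumed examples.
import math

rho = 1000.0        # water density, kg/m^3
c = 1200.0          # assumed pressure-wave speed in the pipe, m/s
diameter = 0.10     # pipe diameter, m
velocity = 2.0      # flow velocity that is suddenly stopped, m/s

area = math.pi * diameter**2 / 4      # pipe cross-section, m^2
Q = area * velocity                   # volumetric flow rate, m^3/s
Z = rho * c / area                    # hydraulic impedance, Pa*s/m^3

delta_p = Z * Q                       # equals rho * c * velocity
print(f"pressure surge ~ {delta_p / 1e5:.0f} bar")
```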
What is the reason for the stochastic nature of all observed resistance-switching processes?,"In the latter case no price level drift is allowed away from the predetermined path, while in the former case any stochastic change to the price level permanently affects the expected values of the price level at each time along its future path. Duane's initial results using this hybrid stochastic simulation were positive when the model correctly supported the idea of an abrupt finite-temperature transition in quantum chromodynamics, which was an controversial subject at the time. Hybrid stochastic simulations are a sub-class of stochastic simulations. Stochastic drift can also occur in population genetics where it is known as genetic drift. In probability theory, stochastic drift is the change of the average value of a stochastic (random) process. The goal of a hybrid stochastic simulation varies based on context, however they typically aim to either improve accuracy or reduce computational complexity. Stochastics and Dynamics (SD) is an interdisciplinary journal published by World Scientific. In mathematics, a reversible diffusion is a specific example of a reversible stochastic process. The principal focus of this journal is theory and applications of stochastic processes. In sufficiently small populations, drift can also neutralize the effect of deterministic natural selection on the population. ==Stochastic drift in economics and finance== Time series variables in economics and finance — for example, stock prices, gross domestic product, etc. — generally evolve stochastically and frequently are non- stationary. So after the initial shock hits y, its value is incorporated forever into the mean of y, so we have stochastic drift. Stochastic Processes and Their Applications is a monthly peer-reviewed scientific journal published by Elsevier for the Bernoulli Society for Mathematical Statistics and Probability. In this case the stochastic term is stationary and hence there is no stochastic drift, though the time series itself may drift with no fixed long-run mean due to the deterministic component f(t) not having a fixed long-run mean. The Langevin equation excelled at simulating long- time properties, but the addition of noise into the system created inefficient exploration of short-time properties. The first hybrid stochastic simulation was developed in 1985. == History == The first hybrid stochastic simulation was developed by Simon Duane at the University of Illinois at Urbana-Champaign in 1985. A trend stationary process {yt} evolves according to :y_t = f(t) + e_t where t is time, f is a deterministic function, and et is a zero-long-run-mean stationary random variable. For example, a process that counts the number of heads in a series of n fair coin tosses has a drift rate of 1/2 per toss. Articles and papers in the journal describe theory, experiments, algorithms, numerical simulation and applications of stochastic phenomena, with a particular focus on random or stochastic ordinary, partial or functional differential equations and random mappings. 
== Abstracting and indexing == The journal is abstracted and indexed in: * Current Mathematical Publications * Mathematical Reviews * Science Citation Index-Expanded (SCIE), including the Web of Science * CompuMath Citation Index(CMCI) * ISI Alerting Services * Current Contents/Physical, Chemical & Earth Sciences (CC/PC&ES;) * Zentralblatt MATH == References == Category:Mathematics journals Category:Academic journals established in 2001 Category:English-language journals Category:World Scientific academic journals Duane's hybrid stochastic simulation was based upon the idea that the two algorithms complemented each other. In contrast, a unit root (difference stationary) process evolves according to :y_t = y_{t-1} + c + u_t where u_t is a zero-long-run-mean stationary random variable; here c is a non-stochastic drift parameter: even in the absence of the random shocks ut, the mean of y would change by c per period. ","The free-energy barriers for the transition {i} → {j} are not high enough, and the memory device can switch without having to do anything.","The device is subjected to random thermal fluctuations, which trigger the switching event, but it is impossible to predict when it will occur.","The memory device is found to be in a distinct resistance state {j}, and there exists no physical one-to-one relationship between its present state and its foregoing voltage history.","The device is subjected to biases below the threshold value, which still allows for a finite probability of switching, but it is possible to predict when it will occur.","The external bias is set to a value above a certain threshold value, which reduces the free-energy barrier for the transition {i} → {j} to zero.",B,kaggle200,"In other words, ""R"" mentions all letters that label a transition from ""i"" to ""j"", and we also include ε in the case where ""i""=""j"".
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. ""resistance-switching"" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by itself remember their current or voltage history. Transitions between distinct internal memory or resistance states are of probabilistic nature. The probability for a transition from state {i} to state {j} depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by ""lowering"" the free-energy barrier for the transition {i} → {j} by means of, for example, an externally applied bias.
When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
A ""resistance switching"" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition {i} → {j} is reduced to zero. In case one applies biases below the threshold value, there is still a finite probability that the device will switch in course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch without having to do anything.","The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. ""resistance-switching"" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by itself remember their current or voltage history. Transitions between distinct internal memory or resistance states are of probabilistic nature. The probability for a transition from state {i} to state {j} depends on the height of the free-energy barrier between both states. The transition probability can thus be influenced by suitably driving the memory device, i.e., by ""lowering"" the free-energy barrier for the transition {i} → {j} by means of, for example, an externally applied bias.
When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
A ""resistance switching"" event can simply be enforced by setting the external bias to a value above a certain threshold value. This is the trivial case, i.e., the free-energy barrier for the transition {i} → {j} is reduced to zero. In case one applies biases below the threshold value, there is still a finite probability that the device will switch in course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processes. If the free-energy barriers are not high enough, the memory device can even switch without having to do anything.","That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processesTransitions between distinct internal memory or resistance states are of probabilistic natureThe transition probability can thus be influenced by suitably driving the memory device, i.e., by ""lowering"" the free-energy barrier for the transition {i} → {j} by means of, for example, an externally applied bias.
When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
A ""resistance switching"" event can simply be enforced by setting the external bias to a value above a certain threshold value""resistance-switching"" memory devices (ReRAM)) cannot be associated with the memristor concept, i.e., such devices cannot by itself remember their current or voltage historyIn case one applies biases below the threshold value, there is still a finite probability that the device will switch in course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occurIf the free-energy barriers are not high enough, the memory device can even switch without having to do anythingIf the free-energy barriers are not high enough, the memory device can even switch without having to do anything.- In other words, ""R"" mentions all letters that label a transition from ""i"" to ""j"", and we also include ε in the case where ""i""=""j"".
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.gThe above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.gThe probability for a transition from state {i} to state {j} depends on the height of ","That is the basic reason for the stochastic nature of all observed resistance-switching (ReRAM) processesTransitions between distinct internal memory or resistance states are of probabilistic natureThe transition probability can thus be influenced by suitably driving the memory device, i.e., by ""lowering"" the free-energy barrier for the transition {i} → {j} by means of, for example, an externally applied bias.
When a two-terminal non-volatile memory device is found to be in a distinct resistance state {j}, there exists therefore no physical one-to-one relationship between its present state and its foregoing voltage history. The switching behavior of individual non-volatile memory devices thus cannot be described within the mathematical framework proposed for memristor/memristive systems.
A ""resistance switching"" event can simply be enforced by setting the external bias to a value above a certain threshold value. ""resistance-switching"" memory devices (ReRAM) cannot be associated with the memristor concept, i.e., such devices cannot by itself remember their current or voltage history. In case one applies biases below the threshold value, there is still a finite probability that the device will switch in course of time (triggered by a random thermal fluctuation), but – as one is dealing with probabilistic processes – it is impossible to predict when the switching event will occur. If the free-energy barriers are not high enough, the memory device can even switch without having to do anything.- In other words, ""R"" mentions all letters that label a transition from ""i"" to ""j"", and we also include ε in the case where ""i""=""j"".
The above-mentioned thermodynamic principle furthermore implies that the operation of two-terminal non-volatile memory devices (e.g. The probability for a transition from state {i} to state {j} depends on the height of [SEP]What is the reason for the stochastic nature of all observed resistance-switching processes?","['C', 'B', 'D']",0.5
What is the Einstein@Home project?,"Einstein@Home is a volunteer computing project that searches for signals from spinning neutron stars in data from gravitational-wave detectors, from large radio telescopes, and from a gamma-ray telescope. Users regularly contribute about 12.7 petaFLOPS of computational power, which would rank Einstein@Home among the top 45 on the TOP500 list of supercomputers. == Scientific objectives == The Einstein@Home project was originally created to perform all-sky searches for previously unknown continuous gravitational-wave (CW) sources using data from the Laser Interferometer Gravitational-Wave Observatory (LIGO) detector instruments in Washington and Louisiana, USA. Einstein@Home uses the power of volunteer computing in solving the computationally intensive problem of analyzing a large volume of data. As of July 2022, the Einstein@Home project had discovered a total of 39 gamma-ray pulsars in Fermi LAT data. == See also == * Gravitational wave * Laser Interferometer Gravitational-Wave Observatory (LIGO) * List of volunteer computing projects == References == == Scientific Publications == * * * * * * * * * * * * * * * * * * * * == External links == * Einstein@Home Website * Einstein@Home project information in Chinese * Einstein@Home user statistics * Berkeley Open Infrastructure for Network Computing (BOINC) * * Category: Science in society Category:Volunteer computing projects Category:Gravitational-wave telescopes Category:University of Wisconsin–Milwaukee Category:Free science software Category:2005 software Category:Research institutes in Lower Saxony The Einstein@Home analysis of the LAT data makes use of methods initially developed for the detection of continuous gravitational waves. == Gravitational-wave data analysis and results == alt=responsive graphics|thumb|Einstein@Home screensaver Einstein@Home has carried out many analysis runs using data from the LIGO instruments. Since March 2009, part of the Einstein@Home computing power has also been used to analyze data taken by the PALFA Consortium at the Arecibo Observatory in Puerto Rico. Einstein@Home searches data from the LIGO gravitational-wave detectors. The results of this search have led to the first scientific publication of Einstein@Home in Physical Review D. Einstein@Home gained considerable attention in the international volunteer computing community when an optimized application for the S4 data set analysis was developed and released in March 2006 by project volunteer Akos Fekete, a Hungarian programmer. Einstein@Home runs through the same software platform as SETI@home, the Berkeley Open Infrastructure for Network Computing (BOINC). Besides validating Einstein's theory of General Relativity, direct detection of gravitational waves would also constitute an important new astronomical tool. Since July 2011, Einstein@Home is also analyzing data from the Large Area Telescope (LAT), the primary instrument on Fermi Gamma-ray Space Telescope to search for pulsed gamma-ray emission from spinning neutron stars (gamma-ray pulsars). The project conducts the most sensitive all-sky searches for continuous gravitational waves. Einstein@Home is hosted by the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, Hannover, Germany) and the University of Wisconsin–Milwaukee. Cosmology@Home is a volunteer computing project that uses the BOINC platform and was once run at the Departments of Astronomy and Physics at the University of Illinois at Urbana-Champaign. 
Both these new methods were employed in the first Einstein@Home all-sky search for continuous gravitational waves in Advanced LIGO data from the first observing run (O1), the results of which were published on 8 December 2017. The project includes two space observatories, and several observational cosmology probes. It describes the design of searches for continuous gravitational waves over a wide frequency range from three supernova remnants (Vela Jr., Cassiopeia A, and G347.3). As of late July 2006, this new official application had become widely distributed among Einstein@Home users. The Einstein@Home project director is Bruce Allen. The Cosmology@Home application is proprietary. == Milestones == *2007-06-30 Project launches for closed alpha testing - invitation only. *2007-08-23 Project opens registration for public alpha testing. *2007-11-05 Project enters beta testing stage. *2016-12-15 Project moved to the Institut Lagrange de Paris and the Institut d'astrophysique de Paris, both of which are located at the Pierre and Marie Curie University. == See also == * List of volunteer computing projects * Berkeley Open Infrastructure for Network Computing (BOINC) == References == == External links == * * Website of the Research Group running Cosmology@Home * ApJ paper on PICO * The PICO home page * Category:Volunteer computing projects Category:Free science software Category:French National Centre for Scientific Research Category:University of Illinois Urbana-Champaign Category:Science in society ",The Einstein@Home project is a project that aims to detect signals from supernovae or binary black holes. It takes data from LIGO and GEO and sends it out in little pieces to thousands of volunteers for parallel analysis on their home computers.,The Einstein@Home project is a project that aims to detect signals from supernovae or binary black holes. It takes data from SETI and GEO and sends it out in little pieces to thousands of volunteers for parallel analysis on their home computers.,The Einstein@Home project is a distributed computing project that aims to detect simple gravitational waves with constant frequency. It takes data from LIGO and GEO and sends it out in little pieces to thousands of volunteers for parallel analysis on their home computers.,The Einstein@Home project is a project that aims to detect simple gravitational waves with constant frequency. It takes data from LIGO and GEO and sends it out in large pieces to thousands of volunteers for parallel analysis on their home computers.,The Einstein@Home project is a project that aims to detect simple gravitational waves with constant frequency. It takes data from SETI and GEO and sends it out in little pieces to thousands of volunteers for parallel analysis on their home computers.,C,kaggle200,"On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico.
On 1 March 2011, the Einstein@Home project announced their second discovery: a binary pulsar system PSR J1952+2630. The computers of Einstein@Home volunteers from Russia and the UK observed PSR J1952+2630 with the highest statistical significance.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.","On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico.On 26 November 2009, a CUDA-optimized application for the Arecibo Binary Pulsar Search was first detailed on official Einstein@Home webpages. This application uses both a regular CPU and an NVIDIA GPU to perform analyses faster (in some cases up to 50% faster).On 12 August 2010, the Einstein@Home project announced the discovery of a new disrupted binary pulsar, PSR J2007+2722; it may be the fastest-spinning such pulsar discovered to date. The computers of Einstein@Home volunteers Chris and Helen Colvin and Daniel Gebhardt observed PSR 2007+2722 with the highest statistical significance.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.","By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwiseBy taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave.- On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico.
On 1 March 2011, the Einstein@Home project announced their second discovery: a binary pulsar system PSR J1952+2630. On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico. On 26 November 2009, a CUDA-optimized application for the Arecibo Binary Pulsar Search was first detailed on official Einstein@Home webpages. This application uses both a regular CPU and an NVIDIA GPU to perform analyses faster (in some cases up to 50% faster). On 12 August 2010, the Einstein@Home project announced the discovery of a new disrupted binary pulsar, PSR J2007+2722; it may be the fastest-spinning such pulsar discovered to date. The computers of Einstein@Home volunteers from Russia and the UK observed PSR J1952+2630 with the highest statistical significance.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. The computers of Einstein@Home volunteers Chris and Helen Colvin and Daniel Gebhardt observed PSR","By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave.- On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico.
On 1 March 2011, the Einstein@Home project announced their second discovery: a binary pulsar system PSR J1952+2630. On 24 March 2009, it was announced that the Einstein@Home project was beginning to analyze data received by the PALFA Consortium at the Arecibo Observatory in Puerto Rico. On 26 November 2009, a CUDA-optimized application for the Arecibo Binary Pulsar Search was first detailed on official Einstein@Home webpages. This application uses both a regular CPU and an NVIDIA GPU to perform analyses faster (in some cases up to 50% faster). On 12 August 2010, the Einstein@Home project announced the discovery of a new disrupted binary pulsar, PSR J2007+2722; it may be the fastest-spinning such pulsar discovered to date. The computers of Einstein@Home volunteers from Russia and the UK observed PSR J1952+2630 with the highest statistical significance.
The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of gravitational wave. The computers of Einstein@Home volunteers Chris and Helen Colvin and Daniel Gebhardt observed PSR[SEP]What is the Einstein@Home project?","['C', 'A', 'D']",1.0
What happens to an initially inhomogeneous physical system that is isolated by a thermodynamic operation?,"It is, however, the fruit of experience that some physical systems, including isolated ones, do seem to reach their own states of internal thermodynamic equilibrium. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time. For example, for a closed system of interest, a change of internal energy (an extensive state variable of the system) can be occasioned by transfer of energy as heat. The internal energy of a thermally isolated system may therefore change due to the exchange of work energy. Thermodynamic systems may be isolated, closed, or open. In thermodynamics, a thermally isolated system can exchange no mass or heat energy with its environment. In thermodynamics, a mechanically isolated system is a system that is mechanically constraint to disallow deformations, so that it cannot perform any work on its environment. thumb|Properties of Isolated, closed, and open systems in exchanging energy and matter In physical science, an isolated system is either of the following: # a physical system so far removed from other systems that it does not interact with them. # a thermodynamic system enclosed by rigid immovable walls through which neither mass nor energy can pass. thumb|Properties of isolated, closed, and open thermodynamic systems in exchanging energy and matter A thermodynamic system is a body of matter and/or radiation, considered as separate from its surroundings, and studied using the laws of thermodynamics. Classical thermodynamics postulates the existence of systems in their own states of internal thermodynamic equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. By the inverse thermodynamic operation, the system can be split into two subsystems in the obvious way. At equilibrium, only a thermally isolating boundary can support a temperature difference. ==See also== * Closed system * Dynamical system * Mechanically isolated system * Open system * Thermodynamic system * Isolated system ==References== Category:Thermodynamic systems According to Uffink, ""... thermodynamic processes only take place after an external intervention on the system (such as: removing a partition, establishing thermal contact with a heat bath, pushing a piston, etc.). The entropy of a thermally isolated system will increase over time if it is not at equilibrium, but as long as it is at equilibrium, its entropy will be at a maximum and constant value and will not change, no matter how much work energy the system exchanges with its environment. The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. 'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system. ==Selective transfer of matter== For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. 
The equilibrium state is considered to be stable and the main property of the internal variables, as measures of non-equilibrium of the system, is their trending to disappear; the local law of disappearing can be written as relaxation equation for each internal variable where \tau_i= \tau_i(T, x_1, x_2, \ldots, x_n) is a relaxation time of a corresponding variable. An isolated system exchanges no matter or energy with its surroundings, whereas a closed system does not exchange matter but may exchange heat and experience and exert forces. Though very useful, they are strictly hypothetical.Thermodynamics of Spontaneous and Non- Spontaneous Processes; I. M. Kolesnikov et al, pg 136 – at https://books.google.com/books?id=2RzE2pCfijYC&pg;=PA3A System and Its Surroundings; UC Davis ChemWiki, by University of California - Davis, at http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/A_System_And_Its_Surroundings#Isolated_SystemHyperphysics, by the Department of Physics and Astronomy of Georgia State University; at http://hyperphysics.phy-astr.gsu.edu/hbase/conser.html#isosys Classical thermodynamics is usually presented as postulating the existence of isolated systems. ",It will change its internal state only if it is composed of a single subsystem and has internal walls.,It will change its internal state only if it is composed of several subsystems separated from each other by walls.,It will remain in its initial state indefinitely.,It will generally change its internal state over time.,It will change its internal state only if it is composed of a single subsystem.,D,kaggle200,"Another commonly used term that indicates a thermodynamic operation is 'change of constraint', for example referring to the removal of a wall between two otherwise isolated compartments.
An ordinary language expression for a thermodynamic operation is used by Edward A. Guggenheim: ""tampering"" with the bodies.
As a matter of history, the distinction, between a thermodynamic operation and a thermodynamic process, is not found in these terms in nineteenth century accounts. For example, Kelvin spoke of a ""thermodynamic operation"" when he meant what present-day terminology calls a thermodynamic operation followed by a thermodynamic process. Again, Planck usually spoke of a ""process"" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.","An ordinary language expression for a thermodynamic operation is used by Edward A. Guggenheim: ""tampering"" with the bodies.
As a matter of history, the distinction, between a thermodynamic operation and a thermodynamic process, is not found in these terms in nineteenth century accounts. For example, Kelvin spoke of a ""thermodynamic operation"" when he meant what present-day terminology calls a thermodynamic operation followed by a thermodynamic process. Again, Planck usually spoke of a ""process"" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal state. Or if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. A rod of iron, initially prepared to be hot at one end and cold at the other, when isolated, will change so that its temperature becomes uniform all along its length; during the process, the rod is not in thermal equilibrium until its temperature is uniform. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.","If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal stateOr if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls- Another commonly used term that indicates a thermodynamic operation is 'change of constraint', for example referring to the removal of a wall between two otherwise isolated compartments.
An ordinary language expression for a thermodynamic operation is used by Edward A. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature. Again, Planck usually spoke of a ""process"" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by walls. A system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature. In a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-form. An ordinary language expression for a thermodynamic operation is used by Edward A. Such changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materials. Guggenheim: ""tampering"" with the bodies.
As a matter of history, the distinction,","If an initially inhomogeneous physical system, without internal walls, is isolated by a thermodynamic operation, it will in general over time change its internal stateOr if it is composed of several subsystems separated from each other by walls, it may change its state after a thermodynamic operation that changes its walls- Another commonly used term that indicates a thermodynamic operation is 'change of constraint', for example referring to the removal of a wall between two otherwise isolated compartments.
An ordinary language expression for a thermodynamic operation is used by Edward AA system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperature.Again, Planck usually spoke of a ""process"" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process.
An isolated physical system may be inhomogeneous, or may be composed of several subsystems separated from each other by wallsA system prepared as a mixture of petrol vapour and air can be ignited by a spark and produce carbon dioxide and water; if this happens in an isolated system, it will increase the temperature of the system, and during the increase, the system is not in thermal equilibrium; but eventually, the system will settle to a uniform temperatureIn a system prepared as a block of ice floating in a bath of hot water, and then isolated, the ice can melt; during the melting, the system is not in thermal equilibrium; but eventually, its temperature will become uniform; the block of ice will not re-formAn ordinary language expression for a thermodynamic operation is used by Edward ASuch changes may include change of temperature or spatial distribution of temperature, by changing the state of constituent materialsGuggenheim: ""tampering"" with the bodies.
As a matter of history, the distinction,[SEP]What happens to an initially inhomogeneous physical system that is isolated by a thermodynamic operation?","['D', 'E', 'B']",1.0
"What is the concept of simultaneity in Einstein's book, Relativity?","In physics, the relativity of simultaneity is the concept that distant simultaneity – whether two spatially separated events occur at the same time – is not absolute, but depends on the observer's reference frame. Simultaneity may refer to: * Relativity of simultaneity, a concept in special relativity. However, this paper does not contain any discussion of Lorentz's theory or the possible difference in defining simultaneity for observers in different states of motion. The term that accounts for the failure of absolute simultaneity is the vx/c2. thumb|250px|right|A spacetime diagram showing the set of points regarded as simultaneous by a stationary observer (horizontal dotted line) and the set of points regarded as simultaneous by an observer moving at v = 0.25c (dashed line) The equation t′ = constant defines a ""line of simultaneity"" in the (x′, t′) coordinate system for the second (moving) observer, just as the equation t = constant defines the ""line of simultaneity"" for the first (stationary) observer in the (x, t) coordinate system. The book culminates in chapter 6, ""The transition to the relativistic conception of simultaneity"". The Lorentz-transform calculation above uses a definition of extended-simultaneity (i.e. of when and where events occur at which you were not present) that might be referred to as the co-moving or ""tangent free- float-frame"" definition. That is, the set of events which are regarded as simultaneous depends on the frame of reference used to make the comparison. If one reference frame assigns precisely the same time to two events that are at different points in space, a reference frame that is moving relative to the first will generally assign different times to the two events (the only exception being when motion is exactly perpendicular to the line connecting the locations of both events). In this picture, however, the points at which the light flashes hit the ends of the train are not at the same level; they are not simultaneous. ==Lorentz transformation== The relativity of simultaneity can be demonstrated using the Lorentz transformation, which relates the coordinates used by one observer to coordinates used by another in uniform relative motion with respect to the first. Thus, a simultaneity succession is a succession of simultaneities. This possibility was raised by mathematician Henri Poincaré in 1900, and thereafter became a central idea in the special theory of relativity. ==Description== According to the special theory of relativity introduced by Albert Einstein, it is impossible to say in an absolute sense that two distinct events occur at the same time if those events are separated in space. In 1990 Robert Goldblatt wrote Orthogonality and Spacetime Geometry, directly addressing the structure Minkowski had put in place for simultaneity.A.D. Taimanov (1989) ""Review of Orthogonality and Spacetime Geometry"", Bulletin of the American Mathematical Society 21(1) In 2006 Max Jammer, through Project MUSE, published Concepts of Simultaneity: from antiquity to Einstein and beyond. In Minkowski's view, the naïve notion of velocity is replaced with rapidity, and the ordinary sense of simultaneity becomes dependent on hyperbolic orthogonality of spatial directions to the worldline associated to the rapidity. If two events happen at the same time in the frame of the first observer, they will have identical values of the t-coordinate. 
The principle of relativity can be expressed as the arbitrariness of which pair are taken to represent space and time in a plane. ==Thought experiments== ===Einstein's train=== right|thumb|250px|Einstein imagined a stationary observer who witnessed two lightning bolts simultaneously striking both ends of a moving train. A simultaneity succession is a series of different groups of pitches or pitch classes, each of which is played at the same time as the other pitches of its group. Simultaneity is a more specific and more general term than chord: many but not all chords or harmonies are simultaneities, though not all but some simultaneities are chords. In general the second observer traces out a worldline in the spacetime of the first observer described by t = x/v, and the set of simultaneous events for the second observer (at the origin) is described by the line t = vx. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. This means that the events are simultaneous. ","Simultaneity is relative, meaning that two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.","Simultaneity is relative, meaning that two events that appear simultaneous to an observer in a particular inertial reference frame will always be judged as simultaneous by a second observer in a different inertial frame of reference.","Simultaneity is absolute, meaning that two events that appear simultaneous to an observer in a particular inertial reference frame will always be judged as simultaneous by a second observer in a different inertial frame of reference.",Simultaneity is a concept that applies only to Newtonian theories and not to relativistic theories.,Simultaneity is a concept that applies only to relativistic theories and not to Newtonian theories.,A,kaggle200,"Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events are simultaneous from the reference frame of an observer moving at From the reference frame of an observer moving at the events appear to occur in the order From the reference frame of an observer moving at , the events appear to occur in the order . The white line represents a ""plane of simultaneity"" being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant.
Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
In special relativity, an observer is a frame of reference from which a set of objects or events are being measured. Usually this is an inertial reference frame or ""inertial observer"". Less often an observer may be an arbitrary non-inertial reference frame such as a Rindler frame which may be called an ""accelerating observer"".
Einstein wrote in his book, ""Relativity"", that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.","Relativity of simultaneity All observers will agree that for any given event, an event within the given event's future light cone occurs after the given event. Likewise, for any given event, an event within the given event's past light cone occurs before the given event. The before–after relationship observed for timelike-separated events remains unchanged no matter what the reference frame of the observer, i.e. no matter how the observer may be moving. The situation is quite different for spacelike-separated events. Fig. 2-4 was drawn from the reference frame of an observer moving at v = 0. From this reference frame, event C is observed to occur after event O, and event B is observed to occur before event O. From a different reference frame, the orderings of these non-causally-related events can be reversed. In particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally related. The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.Fig. 2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneity. The events in spacetime are invariant, but the coordinate frames transform as discussed above for Fig. 2-3. The three events (A, B, C) are simultaneous from the reference frame of an observer moving at v = 0. From the reference frame of an observer moving at v = 0.3c, the events appear to occur in the order C, B, A. From the reference frame of an observer moving at v = −0.5c, the events appear to occur in the order A, B, C. The white line represents a plane of simultaneity being moved from the past of the observer to the future of the observer, highlighting events residing on it. The gray area is the light cone of the observer, which remains invariant.
Relativity of simultaneity For a moment-by-moment understanding of how the time difference between the twins unfolds, one must understand that in special relativity there is no concept of absolute present. For different inertial frames there are different sets of events that are simultaneous in that frame. This relativity of simultaneity means that switching from one inertial frame to another requires an adjustment in what slice through spacetime counts as the ""present"". In the spacetime diagram on the right, drawn for the reference frame of the Earth-based twin, that twin's world line coincides with the vertical axis (his position is constant in space, moving only in time). On the first leg of the trip, the second twin moves to the right (black sloped line); and on the second leg, back to the left. Blue lines show the planes of simultaneity for the traveling twin during the first leg of the journey; red lines, during the second leg. Just before turnaround, the traveling twin calculates the age of the Earth-based twin by measuring the interval along the vertical axis from the origin to the upper blue line. Just after turnaround, if he recalculates, he will measure the interval from the origin to the lower red line. In a sense, during the U-turn the plane of simultaneity jumps from blue to red and very quickly sweeps over a large segment of the world line of the Earth-based twin. When one transfers from the outgoing inertial frame to the incoming inertial frame there is a jump discontinuity in the age of the Earth-based twin (6.4 years in the example above).
Einstein (The Meaning of Relativity): ""Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously."" Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.","The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.FigTime is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously."" Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of referenceThis relativity of simultaneity means that switching from one inertial frame to another requires an adjustment in what slice through spacetime counts as the ""present""2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneityLess often an observer may be an arbitrary non-inertial reference frame such as a Rindler frame which may be called an ""accelerating observer"".
Einstein wrote in his book, ""Relativity"", that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
In special relativity, an observer is a frame of reference from which a set of objects or events are being measuredRelativity of simultaneity All observers will agree that for any given event, an event within the given event's future light cone occurs after the given eventIn particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally relatedFor different inertial frames there are different sets of events that are simultaneous in that frameThe gray area is the light cone of the observer, which remains invariant.
Relativity of simultaneity For a moment-by-moment understanding of how the time difference between the twins unfolds, on","The observation that simultaneity is not absolute, but depends on the observer's reference frame, is termed the relativity of simultaneity.FigTime is then defined as the ensemble of the indications of similar clocks, at rest relative to K, which register the same simultaneously."" Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of referenceThis relativity of simultaneity means that switching from one inertial frame to another requires an adjustment in what slice through spacetime counts as the ""present""2-6 illustrates the use of spacetime diagrams in the analysis of the relativity of simultaneityLess often an observer may be an arbitrary non-inertial reference frame such as a Rindler frame which may be called an ""accelerating observer"".
Einstein wrote in his book, ""Relativity"", that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
In special relativity, an observer is a frame of reference from which a set of objects or events are being measuredRelativity of simultaneity All observers will agree that for any given event, an event within the given event's future light cone occurs after the given eventIn particular, one notes that if two events are simultaneous in a particular reference frame, they are necessarily separated by a spacelike interval and thus are noncausally relatedFor different inertial frames there are different sets of events that are simultaneous in that frameThe gray area is the light cone of the observer, which remains invariant.
Relativity of simultaneity For a moment-by-moment understanding of how the time difference between the twins unfolds, on[SEP]What is the concept of simultaneity in Einstein's book, Relativity?","['A', 'B', 'E']",1.0
What is the Josephson effect?,"SQUIDs, or superconducting quantum interference devices, are very sensitive magnetometers that operate via the Josephson effect. Josephson effect has also been observed in superfluid helium quantum interference devices (SHeQUIDs), the superfluid helium analog of a dc-SQUID.Physics Today, Superfluid helium interferometers, Y. Sato and R. Packard, October 2012, page 31 ==The Josephson equations== thumb|Diagram of a single Josephson junction. Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. In physics, the Josephson effect is a phenomenon that occurs when two superconductors are placed in proximity, with some barrier or restriction between them. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements. The Josephson effect can be calculated using the laws of quantum mechanics. The Josephson constant is defined as: K_J=\frac{2 e}{h}\,, and its inverse is the magnetic flux quantum: \Phi_0=\frac{h}{2 e}=2 \pi \frac{\hbar}{2 e}\,. The Josephson effect is also used for the most precise measurements of elementary charge in terms of the Josephson constant and von Klitzing constant which is related to the quantum Hall effect. Single-electron transistors are often constructed of superconducting materials, allowing use to be made of the Josephson effect to achieve novel effects. The Josephson effect produces a current, known as a supercurrent, that flows continuously without any voltage applied, across a device known as a Josephson junction (JJ). The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to ""super-shorts"" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors. The critical current of the Josephson junction depends on the properties of the superconductors, and can also be affected by environmental factors like temperature and externally applied magnetic field. Josephson junctions are integral in superconducting quantum computing as qubits such as in a flux qubit or others schemes where the phase and charge act as the conjugate variables. Josephson junctions are active circuit elements in superconducting circuits. This phenomenon is also known as kinetic inductance. == Three main effects == There are three main effects predicted by Josephson that follow directly from the Josephson equations: ===The DC Josephson effect=== The DC Josephson effect is a direct current crossing the insulator in the absence of any external electromagnetic field, owing to tunneling. The accuracy of the Josephson voltage–frequency relation V = nf/K_\text{J} , and its independence from experimental conditions, such as bias current, temperature, and junction materials, have been subjected to many tests.V. Kose, and J. Niemeyer: in The Art of Measurement, ed. B. Kramer (Weinheim: VCH) 249 (1988) No significant deviation from this relation has ever been found. The Josephson penetration depth usually ranges from a few μm to several mm if the critical supercurrent density is very low. 
==See also== *Pi Josephson junction *φ Josephson junction *Josephson diode *Andreev reflection *Fractional vortices *Ginzburg–Landau theory *Macroscopic quantum phenomena *Macroscopic quantum self-trapping *Quantum computer *Quantum gyroscope *Rapid single flux quantum (RSFQ) *Semifluxon *Zero-point energy *Josephson vortex == References == Category:Condensed matter physics Category:Superconductivity Category:Sensors Category:Mesoscopic physics Category:Energy (physics) This effect, known as the (inverse) AC Josephson effect, is observed as a constant voltage step at V = hf/2e in the voltage–current (I–V) curve of the junction. This behaviour is derived from the kinetic energy of the charge carriers, instead of the energy in a magnetic field. ==Josephson energy== Based on the similarity of the Josephson junction to a non-linear inductor, the energy stored in a Josephson junction when a supercurrent flows through it can be calculated.Michael Tinkham, Introduction to superconductivity, Courier Corporation, 1986 The supercurrent flowing through the junction is related to the Josephson phase by the current-phase relation (CPR): :I = I_c \sin\varphi. The Josephson effect has found wide usage, for example in the following areas. ","The Josephson effect is a phenomenon exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant.","The Josephson effect is a phenomenon exploited by magnetic devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the magnetic constant.","The Josephson effect is a phenomenon exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the electric flux quantum Φ0 = h/(2e), where h is the Planck constant.","The Josephson effect is a phenomenon exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = e/(2h), where h is the Planck constant.","The Josephson effect is a phenomenon exploited by magnetic devices such as SQUIDs. It is used in the most accurate available measurements of the electric flux quantum Φ0 = h/(2e), where h is the magnetic constant.",A,kaggle200,"where formula_63 is the magnetic flux quantum, formula_64 is the critical supercurrent density (A/m), and formula_65 characterizes the inductance of the superconducting electrodes
In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum formula_2, and thus (coupled with the quantum Hall resistivity) for the Planck constant ""h"". Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum ""h""/2""e"", and thus (coupled with the quantum Hall resistivity) for Planck's constant ""h"". Josephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum ""Φ"" = ""h""/(2""e""), where ""h"" is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.","In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum h/2e, and thus (coupled with the quantum Hall resistivity) for Planck's constant h. Josephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnets. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0=h2e , and thus (coupled with the quantum Hall resistivity) for the Planck constant h. Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggests that there is a ""smooth transition between"" BEC and Bardeen-Cooper-Shrieffer regimes.","This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDsJosephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973. In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973. Josephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnetsIt is used in the most accurate available measurements of the magnetic flux quantum ""Φ"" = ""h""/(2""e""), where ""h"" is the Planck constantIt is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constantIt is used in the most accurate available measurements of the magnetic flux quantum h/2e, and thus (coupled with the quantum Hal","This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDsJosephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973. In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. Josephson was awarded the Nobel Prize for this work in 1973.
In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. Josephson was awarded the Nobel Prize for this work in 1973. Josephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse, allowing the construction of the first practical superconducting magnetsIt is used in the most accurate available measurements of the magnetic flux quantum ""Φ"" = ""h""/(2""e""), where ""h"" is the Planck constantIt is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constantIt is used in the most accurate available measurements of the magnetic flux quantum h/2e, and thus (coupled with the quantum Hal[SEP]What is the Josephson effect?","['C', 'A', 'D']",0.5
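The row above quotes the defining relations of the Josephson constant, K_J = 2e/h, its inverse the magnetic flux quantum Φ0 = h/(2e), and the voltage–frequency relation V = nf/K_J. The sketch below is an editorial illustration (not part of the dataset row) of how these quoted relations evaluate numerically from the exact SI values of h and e; the drive frequency and step index used at the end are illustrative assumptions.

```python
# Evaluate the Josephson constant K_J = 2e/h and the magnetic flux quantum
# Phi_0 = h/(2e) from the exact SI values fixed by the 2019 redefinition.
h = 6.62607015e-34       # Planck constant, J*s (exact)
e = 1.602176634e-19      # elementary charge, C (exact)

K_J = 2 * e / h          # Josephson constant, Hz/V  (~4.8360e14 Hz/V)
Phi_0 = h / (2 * e)      # magnetic flux quantum, Wb (~2.0678e-15 Wb)
print(f"K_J   = {K_J:.6e} Hz/V")
print(f"Phi_0 = {Phi_0:.6e} Wb")

# The quoted voltage-frequency relation V = n*f/K_J then gives the voltage of
# the n-th Shapiro step at drive frequency f (illustrative values, not from the row).
n, f = 1, 10e9
print(f"V = {n * f / K_J:.6e} V")   # about 20.7 microvolts at 10 GHz
```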
What is the SI unit of the physical quantity m/Q?,"The SI unit of the physical quantity m/Q is kilogram per coulomb. ===Mass spectrometry and m/z=== The units and notation above are used when dealing with the physics of mass spectrometry; however, the m/z notation is used for the independent variable in a mass spectrum. The metre per second squared is the unit of acceleration in the International System of Units (SI). Therefore, the unit metre per second squared is equivalent to newton per kilogram, N·kg−1, or N/kg.Kirk, Tim: Physics for the IB Diploma; Standard and Higher Level, Page 61, Oxford University Press, 2003. The SI has special names for 22 of these derived units (for example, hertz, the SI unit of measurement of frequency), but the rest merely reflect their derivation: for example, the square metre (m2), the SI derived unit of area; and the kilogram per cubic metre (kg/m3 or kg⋅m−3), the SI derived unit of density. }} The mass-to-charge ratio (m/Q) is a physical quantity relating the mass (quantity of matter) and the electric charge of a given particle, expressed in units of kilograms per coulomb (kg/C). The unit of force is the newton (N), and mass has the SI unit kilogram (kg). As a derived unit, it is composed from the SI base units of length, the metre, and time, the second. Name Symbol Quantity Equivalent SI unit gal Gal acceleration 1 Gal = 1 cm⋅s−2 = 0.01 m⋅s−2 unified atomic mass unit u mass 1 u = volt-ampere reactive var reactive power 1 var = 1 V⋅A == Changes to units mentioned in the SI == With the publication of each edition of the SI brochure, the list of non-SI units listed in tables changed compared to the preceding SI brochures.Bureau international des poids et mesures, Le Système international d'unités (SI) / The International System of Units (SI), 8th ed. (Sèvres: Organisation Intergouvernementale de la Convention du Mètre, 2006‑05), . One newton equals one kilogram metre per second squared. From 2005 to early 2019, the definitions of the SI base units were as follows: SI base units Name Symbol Measure Pre-2019 (2005) formal definition Historical origin / justification Dimension symbol metre m length ""The metre is the length of the path travelled by light in vacuum during a time interval of 1 / of a second."" This is for compatibility with East Asian encodings and not intended to be used in new documents. ==Conversions== == See also == * Foot per second squared * Gal * Gravitational acceleration * Standard gravity *acceleration ==References== Category:Units of acceleration Category:SI derived units This is a list of units that are not defined as part of the International System of Units (SI) but are otherwise mentioned in the SI Brochure,Bureau international des poids et mesures, ""Non-SI units that are accepted for use with the SI"", in: Le Système international d'unités (SI) / The International System of Units (SI), 9th ed. (Sèvres: 2019), , c. 4, pp. 145–146. listed as being accepted for use alongside SI-units, or for explanatory purposes. 
==Units officially accepted for use with the SI== Name Symbol Quantity Value in SI units minute min time 1 min = 60 s hour h time 1 h = 60 min = 3 600 s day d time 1 d = 24 h = 1440 min = 86 400 s astronomical unit au length 1 au = 149 597 870 700 m degree ° plane angle and phase angle 1° = (/180) rad arcminute ′ plane angle and phase angle 1′ = (1/60)° = (/10 800) rad arcsecond ″ plane angle and phase angle 1″ = (1/60)′ = (1/3 600)° = (/648 000) rad hectare ha area 1 ha = 1 hm2 = 10 000 m2 litre l, L volume 1 L = 1 dm3 = 1 000 cm3 = 0.001 m3 tonne t mass 1 t = 1 Mg = 1 000 kg dalton Da mass 1 Da = electronvolt eV energy 1 eV = neper Np logarithmic ratio quantity — bel, decibel B, dB logarithmic ratio quantity — The SI prefixes can be used with several of these units, but not, for example, with the non-SI units of time. == Other units defined but not officially sanctioned == The following table lists units that are effectively defined in side- and footnotes in the 9th SI brochure. As acceleration, the unit is interpreted physically as change in velocity or speed per time interval, i.e. metre per second per second and is treated as a vector quantity. ==Example== An object experiences a constant acceleration of one metre per second squared (1 m/s2) from a state of rest, then it achieves the speed of 5 m/s after 5 seconds and 10 m/s after 10 seconds. SI derived units are units of measurement derived from the seven base units specified by the International System of Units (SI). * Symbols Units and Nomenclature in Physics IUPAP-25 IUPAP-25, E.R. Cohen & P. Giacomo, Physics 146A (1987) 1–68. ==External links== *BIPM SI brochure * AIP style manual * NIST on units and manuscript check list * Physics Today's instructions on quantities and units Category:Physical quantities Category:Mass spectrometry Category:Metrology Category:Ratios The names of SI derived units, when written in full, are always in lowercase. L kilogram kg mass ""The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram."" The charge-to-mass ratio (Q/m) of an object is, as its name implies, the charge of an object divided by the mass of the same object. For example, the symbol for hertz is ""Hz"", while the symbol for metre is ""m"". ==Special names== The International System of Units assigns special names to 22 derived units, which includes two dimensionless derived units, the radian (rad) and the steradian (sr). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). ",Meter per second,Pascal per second,Kilogram per coulomb,Newton per meter,Joule per second,C,kaggle200,"The inch per second is a unit of speed or velocity. It expresses the distance in inches (""in"") traveled or displaced, divided by time in seconds (""s"", or ""sec""). The equivalent SI unit is the metre per second.
A corresponding but distinct quantity for describing rotation is angular velocity, for which the SI unit is the radian per second.
Acceleration is quantified in the SI unit metres per second per second (m/s²), in the cgs unit gal (Gal), or popularly in terms of standard gravity (""g"").
The IUPAC recommended symbol for mass and charge are ""m"" and ""Q"", respectively, however using a lowercase ""q"" for charge is also very common. Charge is a scalar property, meaning that it can be either positive (+) or negative (−). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (""e""). The SI unit of the physical quantity ""m""/""Q"" is kilogram per coulomb.","The SI unit for molality is moles per kilogram of solvent.
U.S. perm The U.S. perm is defined as 1 grain of water vapor per hour, per square foot, per inch of mercury.Metric perm The metric perm (not an SI unit) is defined as 1 gram of water vapor per day, per square meter, per millimeter of mercury.Equivalent SI unit The equivalent SI measure is the nanogram per second per square meter per pascal.The base normal SI unit for permeance is the kilogram per second per square meter per pascal.
The IUPAC-recommended symbols for mass and charge are m and Q, respectively, however using a lowercase q for charge is also very common. Charge is a scalar property, meaning that it can be either positive (+) or negative (−). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). The SI unit of the physical quantity m/Q is kilogram per coulomb.","The SI unit of the physical quantity m/Q is kilogram per coulombThe SI unit of the physical quantity ""m""/""Q"" is kilogram per coulomb.The equivalent SI unit is the metre per second.
A corresponding but distinct quantity for describing rotation is angular velocity, for which the SI unit is the radian per second.
Acceleration is quantified in the SI unit metres per second per second (m/s²), in the cgs unit gal (Gal), or popularly in terms of standard gravity (""g"").
The IUPAC-recommended symbols for mass and charge are ""m"" and ""Q"", respectively, however using a lowercase ""q"" for charge is also very common. The U.S. perm is defined as 1 grain of water vapor per hour, per square foot, per inch of mercury. Metric perm The metric perm (not an SI unit) is defined as 1 gram of water vapor per day, per square meter, per millimeter of mercury. Equivalent SI unit The equivalent SI measure is the nanogram per second per square meter per pascal. The base normal SI unit for permeance is the kilogram per second per square meter per pascal.
The IUPAC-recommended symbols for mass and charge are m and Q, respectively, however using a lowercase q for charge is also very common. The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (""e""). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). The SI unit for molality is moles per kilogram of solvent.
U.S- The inch per second is a unit of speed or velocityIt expresses the distance in inches (""in"") traveled or displaced, divided by time in seconds (""s"", or ""sec"")Charge is a scalar property, meaning that it can be either positive (+) or negative (−)perm The U.S","The SI unit of the physical quantity m/Q is kilogram per coulombThe SI unit of the physical quantity ""m""/""Q"" is kilogram per coulomb.The equivalent SI unit is the metre per second.
A corresponding but distinct quantity for describing rotation is angular velocity, for which the SI unit is the radian per second.
Acceleration is quantified in the SI unit metres per second per second (m/s²), in the cgs unit gal (Gal), or popularly in terms of standard gravity (""g"").
The IUPAC-recommended symbols for mass and charge are ""m"" and ""Q"", respectively, however using a lowercase ""q"" for charge is also very common. The U.S. perm is defined as 1 grain of water vapor per hour, per square foot, per inch of mercury. Metric perm The metric perm (not an SI unit) is defined as 1 gram of water vapor per day, per square meter, per millimeter of mercury. Equivalent SI unit The equivalent SI measure is the nanogram per second per square meter per pascal. The base normal SI unit for permeance is the kilogram per second per square meter per pascal.
The IUPAC-recommended symbols for mass and charge are m and Q, respectively, however using a lowercase q for charge is also very common. The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (""e""). The Coulomb (C) is the SI unit of charge; however, other units can be used, such as expressing charge in terms of the elementary charge (e). The SI unit for molality is moles per kilogram of solvent.
U.S- The inch per second is a unit of speed or velocityIt expresses the distance in inches (""in"") traveled or displaced, divided by time in seconds (""s"", or ""sec"")Charge is a scalar property, meaning that it can be either positive (+) or negative (−)perm The U.S[SEP]What is the SI unit of the physical quantity m/Q?","['C', 'D', 'E']",1.0
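The row above states that the SI unit of the mass-to-charge ratio m/Q is the kilogram per coulomb. A minimal editorial sketch (not part of the dataset row) that evaluates m/Q for two common charge carriers; the choice of particles and the approximate CODATA mass values are assumptions for illustration.

```python
# Mass-to-charge ratio m/Q expressed in the SI unit kg/C.
e   = 1.602176634e-19    # elementary charge, C (exact)
m_e = 9.1093837015e-31   # electron mass, kg (approximate CODATA value)
m_p = 1.67262192369e-27  # proton mass,   kg (approximate CODATA value)

print(f"electron m/Q = {m_e / e:.4e} kg/C")   # ~5.686e-12 kg/C
print(f"proton   m/Q = {m_p / e:.4e} kg/C")   # ~1.044e-8  kg/C
```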
How many crystallographic point groups are there in three-dimensional space?,"However, the crystallographic restriction on the general point groups results in there being only 32 crystallographic point groups. The 27 point groups in the table plus T, Td, Th, O and Oh constitute 32 crystallographic point groups. === Hermann–Mauguin notation=== 480px|thumb|right|Subgroup relations of the 32 crystallographic point groups (rows represent group orders from bottom to top as: 1,2,3,4,6,8,12,16,24, and 48.) Together, these make up the 32 so- called crystallographic point groups. ==The seven infinite series of axial groups== The infinite series of axial or prismatic groups have an index n, which can be any integer; in each series, the nth symmetry group contains n-fold rotational symmetry about an axis, i.e. symmetry with respect to a rotation by an angle 360°/n. n=1 covers the cases of no rotational symmetry at all. There are infinitely many three-dimensional point groups. These 32 point groups are one-and-the-same as the 32 types of morphological (external) crystalline symmetries derived in 1830 by Johann Friedrich Christian Hessel from a consideration of observed crystal forms. The possible combinations are: **Four 3-fold axes (the three tetrahedral symmetries T, Th, and Td) **Four 3-fold axes and three 4-fold axes (octahedral symmetries O and Oh) **Ten 3-fold axes and six 5-fold axes (icosahedral symmetries I and Ih) According to the crystallographic restriction theorem, only a limited number of point groups are compatible with discrete translational symmetry: 27 from the 7 infinite series, and 5 of the 7 others. In three dimensional geometry, there are four infinite series of point groups in three dimensions (n≥1) with n-fold rotational or reflectional symmetry about one axis (by an angle of 360°/n) that does not change the object. Thus we have, with bolding of the 3 dihedral crystallographic point groups: Order Isometry group Abstract group # of order 2 elements Cycle diagram 8 D2h Z23 7 40px 16 D4h Dih4 × Z2 11 40px 24 D6h Dih6 × Z2 = Dih3 × Z22 15 32 D8h Dih8 × Z2 19 etc. Finite spherical symmetry groups are also called point groups in three dimensions. The remaining seven are, with bolding of the 5 crystallographic point groups (see also above): Order Isometry group Abstract group # of order 2 elements Cycle diagram 12 T A4 3 40px 24 Td, O S4 9 40px 24 Th A4 × Z2 7 40px 48 Oh S4 × Z2 19 60 I A5 15 120 Ih A5 × Z2 31 ==Fundamental domain== Disdyakis triacontahedron 120px 120px The planes of reflection for icosahedral symmetry intersect the sphere on great circles, with right spherical triangle fundamental domains The planes of reflection for icosahedral symmetry intersect the sphere on great circles, with right spherical triangle fundamental domains The fundamental domain of a point group is a conic solid. For finite 3D point groups, see also spherical symmetry groups. Thus we have, with bolding of the 10 cyclic crystallographic point groups, for which the crystallographic restriction applies: Order Isometry groups Abstract group # of order 2 elements Cycle diagram 1 C1 Z1 0 40px 2 C2, Ci, Cs Z2 1 40px 3 C3 Z3 0 40px 4 C4, S4 Z4 1 40px 5 C5 Z5 0 40px 6 C6, S6, C3h Z6 = Z3 × Z2 1 40px 7 C7 Z7 0 40px 8 C8, S8 Z8 1 40px 9 C9 Z9 0 40px 10 C10, S10, C5h Z10 = Z5 × Z2 1 40px etc. ===Symmetry groups in 3D that are dihedral as abstract group=== In 2D dihedral group Dn includes reflections, which can also be viewed as flipping over flat objects without distinction of front- and backside. 
In crystallography, a crystallographic point group is a set of symmetry operations, corresponding to one of the point groups in three dimensions, such that each operation (perhaps followed by a translation) would leave the structure of a crystal unchanged i.e. the same kinds of atoms would be placed in similar positions as before the transformation. This constraint means that the point group must be the symmetry of some three-dimensional lattice. The crystallography groups, 32 in total, are a subset with element orders 2, 3, 4 and 6.Sands, 1993 == Involutional symmetry == There are four involutional groups: no symmetry (C1), reflection symmetry (Cs), 2-fold rotational symmetry (C2), and central point symmetry (Ci). Up to conjugacy, the set of finite 3D point groups consists of: *, which have at most one more-than-2-fold rotation axis; they are the finite symmetry groups on an infinite cylinder, or equivalently, those on a finite cylinder. # Axes of rotation, rotoinversion axes, and mirror planes remain unchanged. ==See also== * Molecular symmetry * Point group * Space group * Point groups in three dimensions * Crystal system == References == ==External links== *Point-group symbols in International Tables for Crystallography (2006). This is in contrast to projective polyhedra – the sphere does cover projective space (and also lens spaces), and thus a tessellation of projective space or lens space yields a distinct notion of polyhedron. ==See also== *List of spherical symmetry groups *List of character tables for chemically important 3D point groups *Point groups in two dimensions *Point groups in four dimensions *Symmetry *Euclidean plane isometry *Group action *Point group *Crystal system *Space group *List of small groups *Molecular symmetry ==Footnotes== ==References== * . * 6.5 The binary polyhedral groups, p. 68 * ==External links== *Graphic overview of the 32 crystallographic point groups – form the first parts (apart from skipping n=5) of the 7 infinite series and 5 of the 7 separate 3D point groups *Overview of properties of point groups *Simplest Canonical Polyhedra of Each Symmetry Type (uses Java) *Point Groups and Crystal Systems, by Yi-Shu Wei, pp. 4–6 * The Geometry Center: 10.1 Formulas for Symmetries in Cartesian Coordinates (three dimensions) Category:Euclidean symmetries Category:Group theory The point groups in three dimensions are heavily used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups. ==3D isometries that leave origin fixed== The symmetry group operations (symmetry operations) are the isometries of three-dimensional space R3 that leave the origin fixed, forming the group O(3). A, ch. 10.1, p. 794 *Pictorial overview of the 32 groups Category:Symmetry Category:Crystallography Category:Discrete groups ",7,32,14,5,27,B,kaggle200,"There are many papers dealing with the formula_36 symbols or Clebsch-Gordon coefficients for the finite crystallographic point groups
An abbreviated form of the Hermann–Mauguin notation commonly used for space groups also serves to describe crystallographic point groups. Group names are
In the classification of crystals, each point group defines a so-called (geometric) crystal class. There are infinitely many three-dimensional point groups. However, the crystallographic restriction on the general point groups results in there being only 32 crystallographic point groups. These 32 point groups are one-and-the-same as the 32 types of morphological (external) crystalline symmetries derived in 1830 by Johann Friedrich Christian Hessel from a consideration of observed crystal forms.
Thus we have, with bolding of the 10 cyclic crystallographic point groups, for which the crystallographic restriction applies:","Many of the crystallographic point groups share the same internal structure. For example, the point groups 1, 2, and m contain different geometric symmetry operations, (inversion, rotation, and reflection, respectively) but all share the structure of the cyclic group C2. All isomorphic groups are of the same order, but not all groups of the same order are isomorphic. The point groups which are isomorphic are shown in the following table: This table makes use of cyclic groups (C1, C2, C3, C4, C6), dihedral groups (D2, D3, D4, D6), one of the alternating groups (A4), and one of the symmetric groups (S4). Here the symbol "" × "" indicates a direct product.
The remaining non-centrosymmetric crystallographic point groups 4, 42m, 6, 6m2, 43m are neither polar nor chiral.
This is a summary of 4-dimensional point groups in Coxeter notation. 227 of them are crystallographic point groups (for particular values of p and q). (nc) is given for non-crystallographic groups. Some crystallographic group have their orders indexed (order.index) by their abstract group structure.","However, the crystallographic restriction on the general point groups results in there being only 32 crystallographic point groupsThere are infinitely many three-dimensional point groupsMany of the crystallographic point groups share the same internal structure227 of them are crystallographic point groups (for particular values of p and q)- There are many papers dealing with the formula_36 symbols or Clebsch-Gordon coefficients for the finite crystallographic point groups
An abbreviated form of the Hermann–Mauguin notation commonly used for space groups also serves to describe crystallographic point groups. Group names are
In the classification of crystals, each point group defines a so-called (geometric) crystal class. These 32 point groups are one-and-the-same as the 32 types of morphological (external) crystalline symmetries derived in 1830 by Johann Friedrich Christian Hessel from a consideration of observed crystal forms.
Thus we have, with bolding of the 10 cyclic crystallographic point groups, for which the crystallographic restriction applies: The point groups which are isomorphic are shown in the following table: This table makes use of cyclic groups (C1, C2, C3, C4, C6), dihedral groups (D2, D3, D4, D6), one of the alternating groups (A4), and one of the symmetric groups (S4). Some crystallographic groups have their orders indexed (order.index) by their abstract group structure. Here the symbol "" × "" indicates a direct product.
The remaining non-centrosymmetric crystallographic point groups 4, 42m, 6, 6m2, 43m are neither polar nor chiral.
This is a summary of 4-dimensional point groups in Coxeter notation(nc) is given for non-crystallographic groupsFor example, the point groups 1, 2, and m contain different geometric symmetry operations, (inversion, rotation, and reflection, respectively) but all share the structure of the cyclic group C2All isomorphic groups are of the same order, but not all groups of the same order are isomorphic","However, the crystallographic restriction on the general point groups results in there being only 32 crystallographic point groupsThere are infinitely many three-dimensional point groupsMany of the crystallographic point groups share the same internal structure227 of them are crystallographic point groups (for particular values of p and q)- There are many papers dealing with the formula_36 symbols or Clebsch-Gordon coefficients for the finite crystallographic point groups
An abbreviated form of the Hermann–Mauguin notation commonly used for space groups also serves to describe crystallographic point groups. Group names are
In the classification of crystals, each point group defines a so-called (geometric) crystal class. These 32 point groups are one-and-the-same as the 32 types of morphological (external) crystalline symmetries derived in 1830 by Johann Friedrich Christian Hessel from a consideration of observed crystal forms.
Thus we have, with bolding of the 10 cyclic crystallographic point groups, for which the crystallographic restriction applies: The point groups which are isomorphic are shown in the following table: This table makes use of cyclic groups (C1, C2, C3, C4, C6), dihedral groups (D2, D3, D4, D6), one of the alternating groups (A4), and one of the symmetric groups (S4). Some crystallographic groups have their orders indexed (order.index) by their abstract group structure. Here the symbol "" × "" indicates a direct product.
The remaining non-centrosymmetric crystallographic point groups 4, 42m, 6, 6m2, 43m are neither polar nor chiral.
This is a summary of 4-dimensional point groups in Coxeter notation(nc) is given for non-crystallographic groupsFor example, the point groups 1, 2, and m contain different geometric symmetry operations, (inversion, rotation, and reflection, respectively) but all share the structure of the cyclic group C2All isomorphic groups are of the same order, but not all groups of the same order are isomorphic[SEP]How many crystallographic point groups are there in three-dimensional space?","['B', 'D', 'C']",1.0
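The row above attributes the reduction from infinitely many 3D point groups to 32 crystallographic ones to the crystallographic restriction theorem. The sketch below is an editorial illustration of only the rotation-order part of that restriction (the trace 1 + 2cos θ of a lattice-compatible 3D rotation must be an integer); it does not enumerate the 32 groups themselves, and the search range is an arbitrary choice.

```python
import math

# Crystallographic restriction: a rotation by 2*pi/n can map a 3-D lattice to
# itself only if its matrix trace 1 + 2*cos(2*pi/n) is an integer, i.e. only
# if 2*cos(2*pi/n) is an integer.  This singles out the orders 1, 2, 3, 4, 6.
allowed = []
for n in range(1, 13):                     # arbitrary upper bound for the demo
    t = 2 * math.cos(2 * math.pi / n)
    if abs(t - round(t)) < 1e-9:           # integer trace -> lattice-compatible
        allowed.append(n)
print("lattice-compatible rotation orders:", allowed)   # [1, 2, 3, 4, 6]
```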
What is the Liouville density?,"* An Introduction to Liouville Theory, Talk at Institute for Advanced Study by Antti Kupiainen, May 2018. In physics, Liouville field theory (or simply Liouville theory) is a two- dimensional conformal field theory whose classical equation of motion is a generalization of Liouville's equation. The Dirichlet inverse of Liouville function is the absolute value of the Möbius function, \lambda^{-1}(n) = |\mu(n)| = \mu^2(n), the characteristic function of the squarefree integers. The model can be viewed as a perturbation of Liouville theory. In the mathematical field of differential geometry a Liouville surface is a type of surface which in local coordinates may be written as a graph in R3 :z=f(x,y) such that the first fundamental form is of the form :ds^2 = \big(f_1(x) + f_2(y)\big)\left(dx^2+dy^2\right).\, Sometimes a metric of this form is called a Liouville metric. Liouville theory is defined for all complex values of the central charge c of its Virasoro symmetry algebra, but it is unitary only if :c\in(1,+\infty), and its classical limit is : c\to +\infty. Although it is an interacting theory with a continuous spectrum, Liouville theory has been solved. The Liouville Lambda function, denoted by λ(n) and named after Joseph Liouville, is an important arithmetic function. In that case, certain correlation functions between primary fields in the Liouville theory are mapped to correlation functions of the Gibbs measure of the particle. Liouville theory is unitary if and only if c\in (1,+\infty). It was first called Liouville theory when it was found to actually exist, and to be spacelike rather than timelike. The Lambert series for the Liouville function is :\sum_{n=1}^\infty \frac{\lambda(n)q^n}{1-q^n} = \sum_{n=1}^\infty q^{n^2} = \frac{1}{2}\left(\vartheta_3(q)-1\right), where \vartheta_3(q) is the Jacobi theta function. ==Conjectures on weighted summatory functions== thumb|none|Summatory Liouville function L(n) up to n = 104. In particular, its three-point function on the sphere has been determined analytically. ==Introduction== Liouville theory describes the dynamics of a field \phi called the Liouville field, which is defined on a two-dimensional space. However, it has been argued that the model itself is not invariant. ==Applications== ===Liouville gravity=== In two dimensions, the Einstein equations reduce to Liouville's equation, so Liouville theory provides a quantum theory of gravity that is called Liouville gravity. Where it is not, it is more usual to specify the density directly. This has applications to extreme value statistics of the two-dimensional Gaussian free field, and allows to predict certain universal properties of the log- correlated random energy models (in two dimensions and beyond). ===Other applications=== Liouville theory is related to other subjects in physics and mathematics, such as three-dimensional general relativity in negatively curved spaces, the uniformization problem of Riemann surfaces, and other problems in conformal mapping. Moreover, correlation functions of the H_3^+ model (the Euclidean version of the SL_2(\mathbb{R}) WZW model) can be expressed in terms of correlation functions of Liouville theory. The spectrum of Liouville theory does not include a vacuum state. In that case the density around any given location is determined by calculating the density of a small volume around that location. 
Mathematically, density is defined as mass divided by volume: \rho = \frac{m}{V} where ρ is the density, m is the mass, and V is the volume. ",The Liouville density is a probability distribution that specifies the probability of finding a particle at a certain position in phase space for a collection of particles.,The Liouville density is a quasiprobability distribution that plays an analogous role to the probability distribution for a quantum particle.,The Liouville density is a bounded probability distribution that is a convenient indicator of quantum-mechanical interference.,The Liouville density is a probability distribution that takes on negative values for states which have no classical model.,The Liouville density is a probability distribution that satisfies all the properties of a conventional probability distribution for a quantum particle.,A,kaggle200,"Example: The series formula_69 is a ""super Liouville number"", while the series formula_70 is a Liouville number with irrationality base 2. (formula_71 represents tetration.)
The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is:
Manuel Bächtold analyzed Peres' textbook from a standpoint of philosophical pragmatism. John Conway and Simon Kochen used a Kochen–Specker configuration from the book in order to prove their free will theorem. Peres' insistence in his textbook that the classical analogue of a quantum state is a Liouville density function was influential in the development of QBism.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails","It is known that π and e are not Liouville numbers.
An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f:R→[0,∞] such that for each interval [a,b]⊂R the probability of X belonging to [a,b] is given by the integral of f over I This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum particle, due to the uncertainty principle. Instead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributions.","Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville densityIt is known that π and e are not Liouville numbers.
An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. Peres' insistence in his textbook that the classical analogue of a quantum state is a Liouville density function was influential in the development of QBism.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Example: The series formula_69 is a ""super Liouville number"", while the series formula_70 is a Liouville number with irrationality base 2 (formula_71 represents tetration.)
The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is:
Manuel Bächtold analyzed Peres' textbook from a standpoint of philosophical pragmatism. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f:R→[0,∞] such that for each interval [a,b]⊂R the probability of X belonging to [a,b] is given by the integral of f over [a,b]. This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase spaceInstead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributionsThis strict interpretation fails for a quantum particle, due to the uncertainty principleThis strict interpretation failsJohn Conway and Simon Kochen used a Kochen–Specke","Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville densityIt is known that π and e are not Liouville numbers.
An absolutely continuous probability distribution is a probability distribution on the real numbers with uncountably many possible values, such as a whole interval in the real line, and where the probability of any event can be expressed as an integral. Peres' insistence in his textbook that the classical analogue of a quantum state is a Liouville density function was influential in the development of QBism.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Example: The series formula_69 is a ""super Liouville number"", while the series formula_70 is a Liouville number with irrationality base 2 (formula_71 represents tetration.)
The above probability function only characterizes a probability distribution if it satisfies all the Kolmogorov axioms, that is:
Manuel Bächtold analyzed Peres' textbook from a standpoint of philosophical pragmatism. More precisely, a real random variable X has an absolutely continuous probability distribution if there is a function f:R→[0,∞] such that for each interval [a,b]⊂R the probability of X belonging to [a,b] is given by the integral of f over [a,b]. This is the definition of a probability density function, so that absolutely continuous probability distributions are exactly those with a probability density function.
A classical particle has a definite position and momentum, and hence it is represented by a point in phase spaceInstead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributionsThis strict interpretation fails for a quantum particle, due to the uncertainty principleThis strict interpretation failsJohn Conway and Simon Kochen used a Kochen–Specke[SEP]What is the Liouville density?","['B', 'A', 'C']",0.5
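The row above describes the Liouville density as the phase-space probability distribution of a classical ensemble, transported along trajectories. The sketch below is an editorial illustration of that statement for a 1-D harmonic oscillator (a choice of system and of initial density that is assumed, not taken from the row): the exact flow has unit Jacobian determinant, so phase-space volume is conserved and the density at time t is just the initial density evaluated at the pulled-back point.

```python
import numpy as np

def flow(q, p, t):
    """Exact time-t phase-space map of the unit harmonic oscillator (m = w = 1)."""
    return q * np.cos(t) + p * np.sin(t), -q * np.sin(t) + p * np.cos(t)

t = 0.73                                   # arbitrary evolution time
J = np.array([[np.cos(t),  np.sin(t)],     # Jacobian of the time-t map
              [-np.sin(t), np.cos(t)]])
print("det(Jacobian) =", np.linalg.det(J)) # 1.0 -> phase-space volume conserved

# Transport an initial Gaussian Liouville density rho0(q, p) along trajectories:
# rho(q, p, t) = rho0 evaluated at the time-reversed preimage of (q, p).
rho0 = lambda q, p: np.exp(-(q - 1.0) ** 2 - p ** 2) / np.pi
q, p = 0.4, -0.2                           # illustrative phase-space point
q0, p0 = flow(q, p, -t)
print("rho(q, p, t) =", rho0(q0, p0))
```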
What are the four qualitative levels of crystallinity described by geologists?,"Crystallinity can be measured using x-ray crystallography, but calorimetric techniques are also commonly used. == Rock crystallinity == Geologists describe four qualitative levels of crystallinity: * holocrystalline rocks are completely crystalline; * hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix; * hypohyaline rocks are partially glassy; * holohyaline rocks (such as obsidian) are completely glassy. ==References== Oxford dictionary of science, 1999, . Crystallinity refers to the degree of structural order in a solid. In such cases, crystallinity is usually specified as a percentage of the volume of the material that is crystalline. The inclusions in the crystals (both solid and fluid) are of great interest; one mineral may enclose another, or may contain spaces occupied by glass, by fluids or by gases. ==Microstructure== The structure of the rock - the relation of its components to one another - is usually clearly indicated, whether it is fragmented or massive; the presence of glassy matter in contradistinction to a completely crystalline or ""holo-crystalline"" condition; the nature and origin of organic fragments; banding, foliation or lamination; the pumiceous or porous structure of many lavas. The degree of crystallinity has a big influence on hardness, density, transparency and diffusion. Hence, it is also important to describe the quality of the shape of a mineral specimen: * Euhedral: a crystal that is completely bounded by its characteristic faces, well-formed. By observing the presence or absence of such lines in liquids with different indices, the index of the crystal can be estimated, usually to within . ==Systematic== Systematic mineralogy is the identification and classification of minerals by their properties. The Manual of Mineralogy places minerals in the following classes: native elements, sulfides, sulfosalts, oxides and hydroxides, halides, carbonates, nitrates and borates, sulfates, chromates, molybdates and tungstates, phosphates, arsenates and vanadates, and silicates. ==Formation environments== The environments of mineral formation and growth are highly varied, ranging from slow crystallization at the high temperatures and pressures of igneous melts deep within the Earth's crust to the low temperature precipitation from a saline brine at the Earth's surface. It is described by the quality (e.g., perfect or fair) and the orientation of the plane in crystallographic nomenclature. 
Their cross-sections often reveal a ""concentric"" pattern calcite, chrysocolla, goethite, malachite Stellate none Wavellite Star-like, radial aggregates radiating from a ""star""-like point to produce gross spheres (crystals are not or weakly separated and have similar lengths) pyrophyllite, aragonite, wavellite, ""pyrite suns"" Tabular/Blocky/Stubby none Vanadinite More elongated than equant, slightly longer than wide, flat tablet-shaped feldspar, topaz, vanadinite Wheat sheaf none Stilbite Aggregates resembling hand-reaped wheat sheaves stilbite ===Asymmetrical/Irregular habits=== Habit Image Description Common example(s) Amygdaloidal none Native copper Like embedded almonds heulandite, subhedral zircon Hemimorphic none Hemimorphite Doubly terminated crystal with two differently shaped ends hemimorphite, elbaite Massive/Compact none Turquoise Shapeless, no distinctive external crystal shape limonite, turquoise, cinnabar, quartz, realgar, lazurite Nodular/Tuberose none Agate Deposit of roughly spherical form with irregular protuberances agate (and other chalcedony) Sceptered none Quartz Crystal growth stops and continues at the top of the crystal, but not at the bottom hedenbergite, quartz ===Symmetrical habits=== Habit Image Description Common example(s) Cubic none Halite Cube shape fluorite, pyrite, galena, halite Dodecahedral none Pyrite Dodecahedron- shaped, 12-sided garnet, pyrite Enantiomorphic none Gypsum Mirror-image habit (i.e. crystal twinning) and optical characteristics; right- and left-handed crystals gypsum, quartz, plagioclase, staurolite Equant/Stout none Olivine Length, width, and breadth roughly equal apophyllite, olivine, garnet Hexagonal none Emerald Hexagonal prism (six-sided) emerald, galena, quartz, hanksite, vanadinite Icositetrahedral none Spessartine Icositetrahedron- shaped, 24-faced spessartine Octahedral none Fluorine Octahedron-shaped, square bipyramid (eight-sided) diamond, fluorine, fluorite, magnetite, pyrite Prismatic none Beryl Elongate, prism-like: well-developed crystal faces parallel to the vertical axis beryl, tourmaline, vanadinite, emerald Pseudo- hexagonal none Aragonite Hexagon-like appearance due to cyclic twinning aragonite, chrysoberyl Rhombohedral none Siderite Rhombohedron-shaped (six- faced rhombi) calcite, rhodochrosite, siderite Scalenohedral none Rhodochrosite Scalenohedron-shaped, pointy ends calcite, rhodochrosite, titanite Tetrahedral none Sphalerite Tetrahedron-shaped, triangular pyramid (four-sided) tetrahedrite, spinel, sphalerite, magnetite ===Rounded/Spherical habits=== Habit Image Description Common example(s) Botryoidal none Chalcedony Grape-like, large and small hemispherical masses, nearly differentiated/separated from each other chalcedony, pyrite, smithsonite, hemimorphite Colloform none Sphalerite Rounded, finely banded sphalerite, pyrite Globular none Gyrolite Isolated hemispheres or spheres calcite, fluorite, gyrolite Mammillary none Chalcedony Breast-like: surface formed by intersecting partial spherical shapes, larger version of botryoidal and/or reniform, also concentric layered aggregates. 
The habit of a crystal is dependent on its crystallographic form and growth conditions, which generally creates irregularities due to limited space in the crystallizing medium (commonly in rocks).Klein, Cornelis, 2007, Minerals and Rocks: Exercises in Crystal and Mineral Chemistry, Crystallography, X-ray Powder Diffraction, Mineral and Rock Identification, and Ore Mineralogy, Wiley, third edition, Wenk, Hans-Rudolph and Andrei Bulakh, 2004, Minerals: Their Constitution and Origin, Cambridge, first edition, ==Crystal forms== Recognizing the habit can aid in mineral identification and description, as the crystal habit is an external representation of the internal ordered atomic arrangement. It, however, retains a focus on the crystal structures commonly encountered in rock-forming minerals (such as the perovskites, clay minerals and framework silicates). If the mineral is well crystallized, it will also have a distinctive crystal habit (for example, hexagonal, columnar, botryoidal) that reflects the crystal structure or internal arrangement of atoms. Historically, mineralogy was heavily concerned with taxonomy of the rock- forming minerals. Mineralogy is a subject of geology specializing in the scientific study of the chemistry, crystal structure, and physical (including optical) properties of minerals and mineralized artifacts. Category:Crystals Category:Physical quantities Category:Phases of matter From the remaining chemical constituents, Al2O3 and K2O are allocated with silica for orthoclase; sodium, aluminium and potassium for albite, and so on until either there is no silica left (in which case feldspathoids are calculated) or excess, in which case the rock contains normative quartz. == Normative and modal mineralogy == Normative mineralogy is an estimate of the mineralogy of the rock. The normative mineralogy of the rock then is calculated, based upon assumptions about the order of mineral formation and known phase relationships of rocks and minerals, and using simplified mineral formulas. Many crystals are polymorphic, having more than one possible crystal structure depending on factors such as pressure and temperature. ==Crystal structure== The crystal structure is the arrangement of atoms in a crystal. A microscopic rock-section in ordinary light, if a suitable magnification (e.g. around 30x) be employed, is seen to consist of grains or crystals varying in color, size, and shape. ==Characteristics of minerals== === Color === Some minerals are colorless and transparent (quartz, calcite, feldspar, muscovite, etc.), while others are yellow or brown (rutile, tourmaline, biotite), green (diopside, hornblende, chlorite), blue (glaucophane). ","Holocrystalline, hypocrystalline, hypercrystalline, and holohyaline","Holocrystalline, hypocrystalline, hypohyaline, and holohyaline","Holocrystalline, hypohyaline, hypercrystalline, and holohyaline","Holocrystalline, hypocrystalline, hypercrystalline, and hyperhyaline","Holocrystalline, hypocrystalline, hypohyaline, and hyperhyaline",B,kaggle200,"This belt of rock was a part of Laurentia, thought by some geologists to be the core of Rodinia. This belt stops suddenly at its western margin, leading geologists to suspect that some piece of crust had rifted away from what is now the West Coast of the United States.
Kyanite is used as a raw material in the manufacture of ceramics and abrasives, and it is an important index mineral used by geologists to trace metamorphic zones.
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. The rocks are mostly pale colored, grey or pink, and in general appearance they are not unlike granites, but dark green varieties are also known. Phonolite is the fine-grained extrusive equivalent.
is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass).","Diabase (), also called dolerite () or microgabbro, is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass).
Diabase is the preferred name in North America, while dolerite is the preferred name in the rest of the English-speaking world, where sometimes the name diabase refers to altered dolerites and basalts. Some geologists prefer to avoid confusion by using the name microgabbro.
The name diabase comes from the French diabase, and ultimately from the Greek διάβασις - meaning ""act of crossing over, transition"".
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. The rocks are mostly pale colored, grey or pink, and in general appearance they are not unlike granites, but dark green varieties are also known. Phonolite is the fine-grained extrusive equivalent.
Geologists describe four qualitative levels of crystallinity: holocrystalline rocks are completely crystalline; hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix; hypohyaline rocks are partially glassy; holohyaline rocks (such as obsidian) are completely glassy.","Phonolite is the fine-grained extrusive equivalent.
Geologists describe four qualitative levels of crystallinity: holocrystalline rocks are completely crystalline; hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix; hypohyaline rocks are partially glassy; holohyaline rocks (such as obsidian) are completely glassy. Phonolite is the fine-grained extrusive equivalent.
Diabase is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. The rocks are mostly pale colored, grey or pink, and in general appearance they are not unlike granites, but dark green varieties are also known. This belt stops suddenly at its western margin, leading geologists to suspect that some piece of crust had rifted away from what is now the West Coast of the United States.
Kyanite is used as a raw material in the manufacture of ceramics and abrasives, and it is an important index mineral used by geologists to trace metamorphic zones.
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass). Some geologists prefer to avoid confusion by using the name microgabbro.
The name diabase comes from the French diabase, and ultimately from the Greek διάβασις - meaning ""act of crossing over, transition"".
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. Diabase (), also called dolerite () or microgabbro, is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass).
Diabase is the preferred name in North America, while dolerite is the preferred name in the rest of the English-speaking world, where sometimes the name diabase refers to altered dolerites and basal","Phonolite is the fine-grained extrusive equivalent.
Geologists describe four qualitative levels of crystallinity: holocrystalline rocks are completely crystalline; hypocrystalline rocks are partially crystalline, with crystals embedded in an amorphous or glassy matrix; hypohyaline rocks are partially glassy; holohyaline rocks (such as obsidian) are completely glassy. Phonolite is the fine-grained extrusive equivalent.
Diabase is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. The rocks are mostly pale colored, grey or pink, and in general appearance they are not unlike granites, but dark green varieties are also known. This belt stops suddenly at its western margin, leading geologists to suspect that some piece of crust had rifted away from what is now the West Coast of the United States.
Kyanite is used as a raw material in the manufacture of ceramics and abrasives, and it is an important index mineral used by geologists to trace metamorphic zones.
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass). Some geologists prefer to avoid confusion by using the name microgabbro.
The name diabase comes from the French diabase, and ultimately from the Greek διάβασις - meaning ""act of crossing over, transition"".
Nepheline syenite is a holocrystalline plutonic rock that consists largely of nepheline and alkali feldspar. Diabase (), also called dolerite () or microgabbro, is a mafic, holocrystalline, subvolcanic rock equivalent to volcanic basalt or plutonic gabbro. Diabase dikes and sills are typically shallow intrusive bodies and often exhibit fine-grained to aphanitic chilled margins which may contain tachylite (dark mafic glass).
Diabase is the preferred name in North America, while dolerite is the preferred name in the rest of the English-speaking world, where sometimes the name diabase refers to altered dolerites and basalts[SEP]What are the four qualitative levels of crystallinity described by geologists?","['B', 'D', 'C']",1.0
What is an order parameter?,"That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc. Parameter has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition. A parameter (), generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). These concepts play an important role in many applications of order theory. In some informal situations it is a matter of convention (or historical accident) whether some or all of the symbols in a function definition are called parameters. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. A court order is an official proclamation by a judge (or panel of judges) that defines the legal relationships between the parties to a hearing, a trial, an appeal or other court proceedings. Several types of orders can be defined from numerical data on the items of the order: a total order results from attaching distinct real numbers to each item and using the numerical comparisons to order the items; instead, if distinct items are allowed to have equal numerical scores, one obtains a strict weak ordering. There are often several choices for the parameters, and choosing a convenient set of parameters is called parametrization. An order is an instruction to buy or sell on a trading venue such as a stock market, bond market, commodity market, financial derivative market or cryptocurrency exchange. Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. Paul Lansky and George Perle criticized the extension of the word ""parameter"" to this sense, since it is not closely related to its mathematical sense, but it remains common. A parameter could be incorporated into the function name to indicate its dependence on the parameter. Order theory captures the intuition of orders that arises from such examples in a general setting. Conditional orders generally get priority based on the time the condition is met. In addition, order theory does not restrict itself to the various classes of ordering relations, but also considers appropriate functions between them. Parameters in a model are the weight of the various probabilities. ""Speaking generally, properties are those physical quantities which directly describe the physical attributes of the system; parameters are those combinations of the properties which suffice to determine the response of the system. Orders are drawn bottom-up: if an element x is smaller than (precedes) y then there exists a path from x to y that is directed upwards. That is, a total order is a binary relation \leq on some set X, which satisfies the following for all a, b and c in X: # a \leq a (reflexive). 
Chapter 4 ""Orders and Order Properties."" ",An order parameter is a measure of the temperature of a physical system.,An order parameter is a measure of the gravitational force in a physical system.,An order parameter is a measure of the magnetic field strength in a physical system.,An order parameter is a measure of the degree of symmetry breaking in a physical system.,An order parameter is a measure of the rotational symmetry in a physical system.,D,kaggle200,"An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge.
If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ""ordered medium"" is defined as a region of space described by a function ""f""(""r"") that assigns to every point in the region an ""order parameter"", and the possible values of the order parameter space constitute an ""order parameter space"". The homotopy theory of defects uses the fundamental group of the order parameter space of a medium to discuss the existence, stability and classifications of topological defects in that medium.
where formula_4 is the standard nematic scalar order parameter and formula_5 is a measure of the biaxiality.","If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ordered medium is defined as a region of space described by a function f(r) that assigns to every point in the region an order parameter, and the possible values of the order parameter space constitute an order parameter space. The homotopy theory of defects uses the fundamental group of the order parameter space of a medium to discuss the existence, stability and classifications of topological defects in that medium. Suppose R is the order parameter space for a medium, and let G be a Lie group of transformations on R. Let H be the symmetry subgroup of G for the medium. Then, the order parameter space can be written as the Lie group quotient R = G/H.
Order parameters: An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge.
An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. For liquid/gas transitions, the order parameter is the difference of the densities.","- An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. Then, the order parameter space can be written as the Lie group quotient R = G/H.
Order parameters: An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge.
If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ""ordered medium"" is defined as a region of space described by a function ""f""(""r"") that assigns to every point in the region an ""order parameter"", and the possible values of the order parameter space constitute an ""order parameter space""For liquid/gas transitions, the order parameter is the difference of the densitiesIf ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ordered medium is defined as a region of space described by a function f(r) that assigns to every point in the region an order parameter, and the possible values of the order parameter space constitute an order parameter space. At the critical point, the order parameter susceptibility will usually diverge.
An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. The homotopy theory of defects uses the fundamental group of the order parameter space of a medium to discuss the existence, stability and classifications of topological defects in that medium. Suppose R is the order parameter space for a medium, and let G be a Lie group of transformations on R. The homotopy theory of defects uses t","- An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. Then, the order parameter space can be written as the Lie group quotient R = G/H.
Order parameters: An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge.
If ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ""ordered medium"" is defined as a region of space described by a function ""f""(""r"") that assigns to every point in the region an ""order parameter"", and the possible values of the order parameter space constitute an ""order parameter space""For liquid/gas transitions, the order parameter is the difference of the densitiesIf ϕ is the order parameter of the system, then mean field theory requires that the fluctuations in the order parameter are much smaller than the actual value of the order parameter near the critical point.
An ordered medium is defined as a region of space described by a function f(r) that assigns to every point in the region an order parameter, and the possible values of the order parameter space constitute an order parameter space. At the critical point, the order parameter susceptibility will usually diverge.
An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. The homotopy theory of defects uses the fundamental group of the order parameter space of a medium to discuss the existence, stability and classifications of topological defects in that medium. Suppose R is the order parameter space for a medium, and let G be a Lie group of transformations on R. The homotopy theory of defects uses t[SEP]What is an order parameter?","['D', 'C', 'E']",1.0
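The order-parameter passages above quote the mean-field criterion and the quotient construction R = G/H without a worked case. The LaTeX sketch below spells out one standard illustration, a Heisenberg ferromagnet; the specific choice of SO(3) and SO(2) is an assumed textbook example, not something the passages single out.

% Assumed illustration (standard textbook case, not taken from the passages above):
% order parameter space of a Heisenberg ferromagnet as a Lie group quotient R = G/H.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Below the critical temperature the magnetization picks a direction, so the order
parameter is $\mathbf{M} = M_0\,\hat{n}$ with $\hat{n}$ a unit vector. The symmetry
group of the disordered phase is $G=\mathrm{SO}(3)$, and a given $\hat{n}$ is left
invariant by $H=\mathrm{SO}(2)$ (rotations about $\hat{n}$), so
\[
  R \;=\; G/H \;=\; \mathrm{SO}(3)/\mathrm{SO}(2) \;\cong\; S^{2},
\]
the sphere of possible magnetization directions. In the same notation, the
mean-field requirement quoted above is that fluctuations obey
$\langle (\delta\phi)^{2}\rangle \ll \phi^{2}$ near the critical point.
\end{document}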
What is the significance of the discovery of the Crab pulsar?,"Their discovery was confirmed by Nather, Warner, and Macfarlane. Light curve and slow motion picture of the pulsar located in the center of the Crab Nebula. The Crab Pulsar (PSR B0531+21) is a relatively young neutron star. The period and location of the Crab Nebula pulsar NP 0532 was discovered by Richard V. E. Lovelace and collaborators on November 10, 1968, at the Arecibo radio observatory. IAU Circ. No. 2113, 1968. The Crab Pulsar is one of very few pulsars to be identified optically. In 2019 the Crab Nebula, and presumably therefore the Crab Pulsar, was observed to emit gamma rays in excess of 100 TeV, making it the first identified source of ultra-high-energy cosmic rays. ==References== Category:Crab Nebula Tauri, CM Category:Optical pulsars Category:Taurus (constellation) Image taken with a photon counting camera on the 80cm telescope of the Wendelstein Observatory, Dr. F. Fleischmann, 1998 Jocelyn Bell Burnell, who co-discovered the first pulsar PSR B1919+21 in 1967, relates that in the late 1950s a woman viewed the Crab Nebula source at the University of Chicago's telescope, then open to the public, and noted that it appeared to be flashing. The Crab Pulsar was the first pulsar for which the spin-down limit was broken using several months of data of the LIGO observatory. The star is the central star in the Crab Nebula, a remnant of the supernova SN 1054, which was widely observed on Earth in the year 1054. Supernova 1054 – Creation of the Crab Nebula. Discovered in 1968, the pulsar was the first to be connected with a supernova remnant. In late 1968, David H. Staelin and Edward C. Reifenstein III reported the discovery of two pulsating radio sources ""near the crab nebula that could be coincident with it"" using the Green Bank radio antenna. Very few X-ray sources ever exceed one crab in brightness. ==History of observation== The Crab Nebula was identified as the remnant of SN 1054 by 1939. The Crab Nebula is often used as a calibration source in X-ray astronomy. A radio source was also reported coincident with the Crab Nebula in late 1968 by L. I. Matveenko in Soviet Astronomy. Most pulsars do not rotate at constant rotation frequency, but can be observed to slow down at a very slow rate (3.7 Hz/s in case of the Crab). Bell Burnell notes that the 30 Hz frequency of the Crab Nebula optical pulsar is difficult for many people to see. ""Beautiful Minds: Jocelyn Bell Burnell"", BBC television documentary broadcast 7 April 2010. It was during this period that Crabtree was called upon as an advisor in lithic studies to the University of Pennsylvania, where he was associated with Edgar B. Howard and the Clovis point type site at Black Water Draw. A subsequent study by them, including William D. Brundage, also found that the NP 0532 source is located at the Crab Nebula. This larger abundance of food is very beneficial to the crab larvae. In 1969 some of Crabtree's work was featured in a special exhibition at New York's American Museum of Natural History. The non-observation so far is not totally unexpected, since physical models of the rotational symmetry of pulsars put a more realistic upper limit on the amplitude of gravitational waves several orders of magnitude below the spin-down limit. 
",The discovery of the Crab pulsar confirmed the black hole model of pulsars.,The discovery of the Crab pulsar confirmed the rotating neutron star model of pulsars.,The discovery of the Crab pulsar confirmed the white dwarf model of pulsars.,The discovery of the Crab pulsar disproved the rotating neutron star model of pulsars.,The discovery of the Crab pulsar confirmed the red giant model of pulsars.,B,kaggle200,"An optical pulsar is a pulsar which can be detected in the visible spectrum. There are very few of these known: the Crab Pulsar was detected by stroboscopic techniques in 1969, shortly after its discovery in radio waves, at the Steward Observatory. The Vela Pulsar was detected in 1977 at the Anglo-Australian Observatory, and was the faintest star ever imaged at that time.
In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.
In 1968, Richard V. E. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. After this discovery, scientists concluded that pulsars were rotating neutron stars. Before that, many scientists believed that pulsars were pulsating white dwarfs.
The discovery of the Crab pulsar provided confirmation of the rotating neutron star model of pulsars. The Crab pulsar 33-millisecond pulse period was too short to be consistent with other proposed models for pulsar emission. Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky.","In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.
In 1968, Richard V. E. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. After this discovery, scientists concluded that pulsars were rotating neutron stars. Before that, many scientists believed that pulsars were pulsating white dwarfs.
The discovery of the Crab pulsar provided confirmation of the rotating neutron star model of pulsars. The Crab pulsar 33-millisecond pulse period was too short to be consistent with other proposed models for pulsar emission. Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky.","Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky. Before that, many scientists believed that pulsars were pulsating white dwarfs.
The discovery of the Crab pulsar provided confirmation of the rotating neutron star model of pulsars. Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. There are very few of these known: the Crab Pulsar was detected by stroboscopic techniques in 1969, shortly after its discovery in radio waves, at the Steward Observatory. The Crab pulsar 33-millisecond pulse period was too short to be consistent with other proposed models for pulsar emission. This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.
In 1968, Richard V. E. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. The Vela Pulsar was detected in 1977 at the Anglo-Australian Observatory, and was the faintest star ever imaged at that time.
In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". After this discovery, scientists concluded that pulsars were rotating neutron stars. In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". - An optical pulsar is a pulsar which can be detected in the visible spectrumE
The discovery of the Crab pulsar provided confirmation of the rotating neutron star model of pulsars. Moreover, the Crab pulsar is so named because it is located at the center of the Crab Nebula, consistent with the 1933 prediction of Baade and Zwicky. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. There are very few of these known: the Crab Pulsar was detected by stroboscopic techniques in 1969, shortly after its discovery in radio waves, at the Steward Observatory. The Crab pulsar 33-millisecond pulse period was too short to be consistent with other proposed models for pulsar emission. This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.
In 1968, Richard V. E. Lovelace and collaborators discovered period 33 ms of the Crab pulsar using Arecibo Observatory. The Vela Pulsar was detected in 1977 at the Anglo-Australian Observatory, and was the faintest star ever imaged at that time.
In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". After this discovery, scientists concluded that pulsars were rotating neutron stars. In 1965, Antony Hewish and Samuel Okoye discovered ""an unusual source of high radio brightness temperature in the Crab Nebula"". - An optical pulsar is a pulsar which can be detected in the visible spectrumE[SEP]What is the significance of the discovery of the Crab pulsar?","['B', 'E', 'C']",1.0
What is the De Haas-Van Alphen effect?,"The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field B is increased. ""On the theory of the De Haas–Van Alphen effect for particles with an arbitrary dispersion law."" The inspiration for the experiment was the recently discovered Shubnikov–de Haas effect by Lev Shubnikov and De Haas, which showed oscillations of the electrical resistivity as function of a strong magnetic field. By the 1970s the Fermi surface of most metallic elements had been reconstructed using De Haas–Van Alphen and Shubnikov–de Haas effects. De Haas thought that the magnetoresistance should behave in an analogous way. The modern formulation allows the experimental determination of the Fermi surface of a metal from measurements performed with different orientations of the magnetic field around the sample. == History == Experimentally it was discovered in 1930 by W.J. de Haas and P.M. van Alphen under careful study of the magnetization of a single crystal of bismuth. A strong homogeneous magnetic field -- typically several teslas -- and a low temperature are required to cause a material to exhibit the DHVA effect. Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings 18 (1915–16). Einstein wrote three papers with Wander J. de Haas on experimental work they did together on Ampère's molecular currents, known as the Einstein–De Haas effect. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. The Einstein–de Haas effect is a physical phenomenon in which a change in the magnetic moment of a free body causes this body to rotate. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism. == Description == The differential magnetic susceptibility of a material is defined as \chi=\frac{\partial M}{\partial H}, where H is the applied external magnetic field and M the magnetization of the material. The theoretical prediction of the phenomenon was formulated before the experiment, in the same year, by Lev Landau. Landau, L. D. ""Diamagnetismus der Metalle."" ""Experimental Proof of the Existence of Ampère's Molecular Currents"" (with Wander J. de Haas) (in English). It is strong enough to be observable in ferromagnetic materials. Probably, he attributed the hyphenated name to de Haas, not meaning both de Haas and H. A. Lorentz. ==Later measurements and applications== The effect was used to measure the properties of various ferromagnetic elements and alloys. The effect was described mathematically using Landau quantization of the electron energies in an applied magnetic field. These measurements also allow the separation of the two contributions to the magnetization: that which is associated with the spin and with the orbital motion of the electrons. According to Frenkel, Einstein wrote in a report to the German Physical Society: ""In the past three months I have performed experiments jointly with de Haas–Lorentz in the Imperial Physicotechnical Institute that have firmly established the existence of Ampère molecular currents."" Einstein and de Haas published two papers in April 1915 containing a description of the expected effect and the experimental results. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. 
",The measurement of the electronic properties of a material using several experimental techniques.,The complex number quantity that describes AC susceptibility and AC permeability.,"The oscillation of the differential susceptibility as a function of 1/H in metals under strong magnetic fields, which relates the period of the susceptibility with the Fermi surface of the material.",The analogue non-linear relation between magnetization and magnetic field in antiferromagnetic materials.,The measurement of magnetic susceptibility in response to an AC magnetic field.,C,kaggle200,"When the magnetic susceptibility is measured in response to an AC magnetic field (i.e. a magnetic field that varies sinusoidally), this is called ""AC susceptibility"". AC susceptibility (and the closely related ""AC permeability"") are complex number quantities, and various phenomena, such as resonance, can be seen in AC susceptibility that cannot occur in constant-field (DC) susceptibility. In particular, when an AC field is applied perpendicular to the detection direction (called the ""transverse susceptibility"" regardless of the frequency), the effect has a peak at the ferromagnetic resonance frequency of the material with a given static applied field. Currently, this effect is called the ""microwave permeability"" or ""network ferromagnetic resonance"" in the literature. These results are sensitive to the domain wall configuration of the material and eddy currents.
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field ""B"" is increased. It can be used to determine the Fermi surface of a material. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. The DHVA effect comes from the orbital motion of itinerant electrons in the material. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism.
The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
Several experimental techniques allow for the measurement of the electronic properties of a material. An important effect in metals under strong magnetic fields, is the oscillation of the differential susceptibility as function of 1/H. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material.","The Pauli paramagnetic susceptibility is a macroscopic effect and has to be contrasted with Landau diamagnetic susceptibility which is equal to minus one third of Pauli's and also comes from delocalized electrons. The Pauli susceptibility comes from the spin interaction with the magnetic field while the Landau susceptibility comes from the spatial motion of the electrons and it is independent of the spin. In doped semiconductors the ratio between Landau's and Pauli's susceptibilities changes as the effective mass of the charge carriers m∗ can differ from the electron mass me. The magnetic response calculated for a gas of electrons is not the full picture as the magnetic susceptibility coming from the ions has to be included. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field B is increased. It can be used to determine the Fermi surface of a material. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. The DHVA effect comes from the orbital motion of itinerant electrons in the material. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism.
Several experimental techniques allow for the measurement of the electronic properties of a material. An important effect in metals under strong magnetic fields, is the oscillation of the differential susceptibility as function of 1/H. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material.
An analogue non-linear relation between magnetization and magnetic field happens for antiferromagnetic materials.","Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field B is increased. These results are sensitive to the domain wall configuration of the material and eddy currents.
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field ""B"" is increased. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
Several experimental techniques allow for the measurement of the electronic properties of a material. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material.
An analogue non-linear relation between magnetization and magnetic field happens for antiferromagnetic materials. The DHVA effect comes from the orbital motion of itinerant electrons in the material. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. It can be used to determine the Fermi surface of a material. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism.
Several experimental techniques allow for the measurement of the electronic properties of a material. Currently, this effect is called the ""microwave permeability"" or ""network ferromagnetic resonance"" in the literature. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. An important effect in metals under strong magnetic fields,
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field B is increased. These results are sensitive to the domain wall configuration of the material and eddy currents.
The De Haas–Van Alphen effect, often abbreviated to DHVA, is a quantum mechanical effect in which the magnetic susceptibility of a pure metal crystal oscillates as the intensity of the magnetic field ""B"" is increased. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material. Additionally, these formulas may break down for confined systems that differ from the bulk, like quantum dots, or for high fields, as demonstrated in the De Haas-Van Alphen effect.
Several experimental techniques allow for the measurement of the electronic properties of a material. This behaviour is known as the De Haas–Van Alphen effect and relates the period of the susceptibility with the Fermi surface of the material.
An analogue non-linear relation between magnetization and magnetic field happens for antiferromagnetic materials. The DHVA effect comes from the orbital motion of itinerant electrons in the material. It is named after Wander Johannes de Haas and his student Pieter M. van Alphen. It can be used to determine the Fermi surface of a material. An equivalent phenomenon at low magnetic fields is known as Landau diamagnetism.
Several experimental techniques allow for the measurement of the electronic properties of a material. Currently, this effect is called the ""microwave permeability"" or ""network ferromagnetic resonance"" in the literature. Other quantities also oscillate, such as the electrical resistivity (Shubnikov–de Haas effect), specific heat, and sound attenuation and speed. An important effect in metals under strong magnetic fields,[SEP]What is the De Haas-Van Alphen effect?","['C', 'D', 'E']",1.0
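The passages above state that the dHvA oscillation is periodic in 1/H and that its period encodes the Fermi surface, but the quantitative link is not written out there. The LaTeX sketch below gives the standard Onsager relation as a hedged illustration; the symbol A_ext (extremal Fermi-surface cross-section) is introduced here for the example and is not named in the passages.

% Sketch of the standard Onsager relation (assumed textbook form, not quoted in
% the passages above) linking the dHvA period in 1/B to the Fermi surface.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The magnetization (and hence the susceptibility) oscillates periodically in $1/B$ with
\[
  \Delta\!\left(\frac{1}{B}\right) \;=\; \frac{2\pi e}{\hbar\,A_{\mathrm{ext}}},
  \qquad\text{equivalently}\qquad
  F \;=\; \frac{\hbar}{2\pi e}\,A_{\mathrm{ext}},
\]
where $F$ is the dHvA frequency in tesla and $A_{\mathrm{ext}}$ is the extremal
cross-sectional area of the Fermi surface in the plane perpendicular to
$\mathbf{B}$. Measuring $F$ for many orientations of the field is what allows the
Fermi surface to be reconstructed, as the passages above describe.
\end{document}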
"What is a ""coffee ring"" in physics?","The shape of particles in the liquid is responsible for coffee ring effect. The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain. ==Flow mechanism== The coffee-ring pattern originates from the capillary flow induced by the evaporation of the drop: liquid evaporating from the edge is replenished by liquid from the interior. The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffee. Mixtures of low boiling point and high boiling point solvents were shown to suppress the coffee ring effect, changing the shape of a deposited solute from a ring-like to a dot-like shape. thumb|Stains produced by the evaporation of coffee spills In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporates. The sealed rings resembled the shape of a doughnut, and the small hole in the middle of the ring enabled the coffee filter ring to be placed in the metal percolator basket around the protruding convection (percolator) tube. The coffee filter rings were designed for use in percolators, and each ring contained a pre-measured amount of coffee grounds that were sealed in a self-contained paper filter. When the liquid evaporates much faster than the particle movement near a three-phase contact line, coffee ring cannot be formed successfully. Recent advances have increased the application of coffee-ring assembly from colloidal particles to organized patterns of inorganic crystals. ==References== Category:Phase transitions Category:Fluid mechanics Category:Convection Category:Physical phenomena Category:Physical chemistry Category:Colloidal chemistry Interaction of the particles suspended in a droplet with the free surface of the droplet is important in creating a coffee ring. It can be suppressed by adding elongated particles, such as cellulose fibers, to the spherical particles that cause the coffee-ring effect. Reverse particle motion may also reduce the coffee-ring effect because of the capillary force near the contact line. The benefit of the pre-packed coffee filter rings was two-fold: First, because the amount of coffee contained in the rings was pre-measured, it negated the need to measure each scoop and then place it in the metal percolator basket. ""When the drop evaporates, the free surface collapses and traps the suspended particles ... eventually all the particles are captured by the free surface and stay there for the rest of their trip towards the edge of the drop.""Coffee-ring phenomenon explained in new theory. phys.org (December 20, 2016) This result means that surfactants can be used to manipulate the motion of the solute particles by changing the surface tension of the drop, rather than trying to control the bulk flow inside the drop. Control of the substrate wetting properties on slippery surfaces can prevent the pinning of the drop contact line, which will, therefore, suppress the coffee ring effect by reducing the number of particles deposited at the contact line. The reversal takes place when the capillary force prevails over the outward coffee-ring flow by the geometric constraints. ==Determinants of size and pattern== The lower-limit size of a coffee ring depends on the time scale competition between the liquid evaporation and the movement of suspended particles. 
Control of the substrate temperature was shown to be an effective way to suppress the coffee ring formed by droplets of water-based PEDOT:PSS solution. A coffee cup is a container, a cup, for serving coffee and coffee-based drinks. While many popular brewing methods and devices use percolation to make coffee, the term ""percolator"" narrowly refers to devices similar to the stove-top coffee pots developed by Hanson Goodrich mentioned above. After use, the coffee filter ring could be easily removed from the basket and discarded. ",A type of coffee that is made by boiling coffee grounds in water.,"A pattern left by a particle-laden liquid after it is spilled, named for the characteristic ring-like deposit along the perimeter of a spill of coffee or red wine.",A type of coffee that is made by mixing instant coffee with hot water.,A type of coffee that is made by pouring hot water over coffee grounds in a filter.,"A pattern left by a particle-laden liquid after it evaporates, named for the characteristic ring-like deposit along the perimeter of a spill of coffee or red wine.",E,kaggle200,"Iced coffee can be made from cold-brew coffee, for which coffee grounds are soaked for several hours and then strained. The next day, the grounds would be filtered out. The result was a very strong coffee concentrate that was mixed with milk and sweetened.
""Drip brew coffee,"" also known as filtered coffee, is made by letting hot water drip onto coffee grounds held in a coffee filter surrounded by a filter holder or brew basket. Drip brew makers can be simple filter holder types manually filled with hot water, or they can use automated systems as found in the popular electric drip coffee-maker. Strength varies according to the ratio of water to coffee and the fineness of the grind, but is typically weaker in taste and contains a lower concentration of caffeine than espresso, though often (due to size) more total caffeine. By convention, regular coffee brewed by this method is served by some restaurants in a brown or black pot (or a pot with a brown or black handle), while decaffeinated coffee is served in an orange pot (or a pot with an orange handle).
Brewed coffee is made by pouring hot water onto ground coffee beans, then allowing to brew. There are several methods for doing this, including using a filter, a percolator, and a French press. Terms used for the resulting coffee often reflect the method used, such as drip brewed coffee, filtered coffee, pour-over coffee, immersion brewed coffee, or simply coffee. Water seeps through the ground coffee, absorbing its constituent chemical compounds, and then passes through a filter. The used coffee grounds are retained in the filter, while the brewed coffee is collected in a vessel such as a carafe or pot.
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporates. The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffee. It is also commonly seen after spilling red wine. The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain.","Coffee tea refers to herbal tea made from non-bean parts of the coffea (coffee plant), and may refer to: Coffee-leaf tea Coffee cherry tea Ground coffee, brewed in a coffee bag, like bagged tea, is referred to simply as ""coffee"", and is similar to filter coffee.
Yuanyang (drink), a drink combining coffee and tea
There are other, different, recipes for coffee containing egg.
Swedish Egg Coffee is an American drink (despite its name) made by mixing coffee grounds with an egg and simmering, like cowboy coffee; the egg makes the grounds sink, leaving smooth coffee.
Egg Brandy Coffee from Sri Lanka. Cuban egg coffee, known locally as ""café a la criolla"", is made by mixing hot espresso coffee with the yolk of a raw egg, sugar and white rum.
Kopi Talua from Indonesia
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporates. The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffee. It is also commonly seen after spilling red wine. The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain.","The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffeeThe mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain.The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stainThe used coffee grounds are retained in the filter, while the brewed coffee is collected in a vessel such as a carafe or pot.
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporatesCoffee tea refers to herbal tea made from non-bean parts of the coffea (coffee plant), and may refer to: Coffee-leaf tea Coffee cherry tea Ground coffee, brewed in a coffee bag, like bagged tea, is referred to simply as ""coffee"", and is similar to filter coffee.
Yuanyang (drink), a drink combining coffee and tea
There are other, different, recipes for coffee containing egg.
Swedish Egg Coffee is an American drink (despite its name) made by mixing coffee grounds with an egg and simmering, like cowboy coffee; the egg makes the grounds sink, leaving smooth coffee.
Egg Brandy Coffee from Sri Lanka. Cuban egg coffee, known locally as ""café a la criolla"", is made by mixing hot espresso coffee with the yolk of a raw egg, sugar and white rum.
Kopi Talua from Indonesia
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporatesWater seeps through the ground coffee, absorbing its constituent chemical compounds, and then passes through a filterTerms used for the resulting coffee often reflect the method used, such as drip brewed coffee, filtered coffee, pour-over coffee, immersion brewed coffee, or simply coffeeThe result was a very strong coffee concentrate that was mixed with milk and sweetened.
""Drip brew coffee,"" also known as filtered coffee, is made by letting hot water drip onto coffee grounds held in a coffee filter surrounded by a filter holder or brew basketDrip brew makers","The phenomenon is named for the characteristic ring-like deposit along the perimeter of a spill of coffeeThe mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stain.The mechanism behind the formation of these and similar rings is known as the coffee ring effect or in some instances, the coffee stain effect, or simply ring stainThe used coffee grounds are retained in the filter, while the brewed coffee is collected in a vessel such as a carafe or pot.
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporatesCoffee tea refers to herbal tea made from non-bean parts of the coffea (coffee plant), and may refer to: Coffee-leaf tea Coffee cherry tea Ground coffee, brewed in a coffee bag, like bagged tea, is referred to simply as ""coffee"", and is similar to filter coffee.
Yuanyang (drink), a drink combining coffee and tea
There are other, different, recipes for coffee containing egg.
Swedish Egg Coffee is an American drink (despite its name) made by mixing coffee grounds with an egg and simmering, like cowboy coffee; the egg makes the grounds sink, leaving smooth coffee.
Egg Brandy Coffee from Sri Lanka. Cuban egg coffee, known locally as ""café a la criolla"", is made by mixing hot espresso coffee with the yolk of a raw egg, sugar and white rum.
Kopi Talua from Indonesia
In physics, a ""coffee ring"" is a pattern left by a puddle of particle-laden liquid after it evaporatesWater seeps through the ground coffee, absorbing its constituent chemical compounds, and then passes through a filterTerms used for the resulting coffee often reflect the method used, such as drip brewed coffee, filtered coffee, pour-over coffee, immersion brewed coffee, or simply coffeeThe result was a very strong coffee concentrate that was mixed with milk and sweetened.
""Drip brew coffee,"" also known as filtered coffee, is made by letting hot water drip onto coffee grounds held in a coffee filter surrounded by a filter holder or brew basketDrip brew makers[SEP]What is a ""coffee ring"" in physics?","['E', 'B', 'A']",1.0
What is the significance of probability amplitudes in quantum mechanics?,"In quantum mechanics, a probability amplitude is a complex number used for describing the behaviour of systems. Interpretation of values of a wave function as the probability amplitude is a pillar of the Copenhagen interpretation of quantum mechanics. This strengthens the probabilistic interpretation explicated above. ==Amplitudes in operators== The concept of amplitudes described above is relevant to quantum state vectors. Probability amplitudes provide a relationship between the quantum state vector of a system and the results of observations of that system, a link was first proposed by Max Born, in 1926. In quantum physics, the scattering amplitude is the probability amplitude of the outgoing spherical wave relative to the incoming plane wave in a stationary-state scattering process.Quantum Mechanics: Concepts and Applications By Nouredine Zettili, 2nd edition, page 623. The correct explanation is, however, by the association of probability amplitudes to each event. A discrete probability amplitude may be considered as a fundamental frequency in the Probability Frequency domain (spherical harmonics) for the purposes of simplifying M-theory transformation calculations. == Examples == Take the simplest meaningful example of the discrete case: a quantum system that can be in two possible states: for example, the polarization of a photon. In other words, the probability amplitudes for the second measurement of depend on whether it comes before or after a measurement of , and the two observables do not commute. ===Mathematical=== In a formal setup, any system in quantum mechanics is described by a state, which is a vector , residing in an abstract complex vector space, called a Hilbert space. These numerical weights are called probability amplitudes, and this relationship used to calculate probabilities from given pure quantum states (such as wave functions) is called the Born rule. It gives to both amplitude and density function a physical dimension, unlike a dimensionless probability. Generally, it is the case when the motion of a particle is described in the position space, where the corresponding probability amplitude function is the wave function. In other words the probability amplitudes are zero for all the other eigenstates, and remain zero for the future measurements. Under the standard Copenhagen interpretation, the normalized wavefunction gives probability amplitudes for the position of the particle. This is key to understanding the importance of this interpretation, because for a given the particle's constant mass, initial and the potential, the Schrödinger equation fully determines subsequent wavefunction, and the above then gives probabilities of locations of the particle at all subsequent times. ==In the context of the double-slit experiment== Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. Therefore, if the system is known to be in some eigenstate of (all probability amplitudes zero except for one eigenstate), then when is observed the probability amplitudes are changed. Clearly, the sum of the probabilities, which equals the sum of the absolute squares of the probability amplitudes, must equal 1. 
According to this interpretation, the purpose of a quantum-mechanical theory is to predict the relative probabilities of various alternative histories (for example, of a particle). === Ensemble interpretation === The ensemble interpretation, also called the statistical interpretation, can be viewed as a minimalist interpretation. Due to this trivial fix this case was hardly ever considered by physicists. then an integral over is simply a sum. If is countable, then an integral is the sum of an infinite series. and defines the value of the probability measure on the set }, in other words, the probability that the quantum system is in the state . The probability amplitudes are unaffected by either measurement, and the observables are said to commute. In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. ",Probability amplitudes are used to determine the mass of particles in quantum mechanics.,Probability amplitudes have no significance in quantum mechanics.,Probability amplitudes are used to determine the velocity of particles in quantum mechanics.,"Probability amplitudes act as the equivalent of conventional probabilities in classical mechanics, with many analogous laws.","Probability amplitudes act as the equivalent of conventional probabilities in quantum mechanics, with many analogous laws.",E,kaggle200,"This equation represents the probability amplitude of a photon propagating from to via an array of slits. Using a wavefunction representation for probability amplitudes, and defining the probability amplitudes as
The squared modulus is applied in signal processing, to relate the Fourier transform and the power spectrum, and also in quantum mechanics, relating probability amplitudes and probability densities.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, in the classic double-slit experiment, electrons are fired randomly at two slits, and the probability distribution of detecting electrons at all parts on a large screen placed behind the slits, is questioned. An intuitive answer is that P(through either slit) = P(through first slit) + P(through second slit), where P(event) is the probability of that event. This is obvious if one assumes that an electron passes through either slit. When nature does not have a way to distinguish which slit the electron has gone through (a much more stringent condition than simply ""it is not observed""), the observed probability distribution on the screen reflects the interference pattern that is common with light waves. If one assumes the above law to be true, then this pattern cannot be explained. The particles cannot be said to go through either slit and the simple explanation does not work. The correct explanation is, however, by the association of probability amplitudes to each event. This is an example of the case A as described in the previous article. The complex amplitudes which represent the electron passing each slit (ψfirst and ψsecond) follow the law of precisely the form expected: ψtotal = ψfirst + ψsecond. This is the principle of quantum superposition. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex:
where the coefficients are complex probability amplitudes, such that the sum of their squares is unity (normalization):","Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, in the classic double-slit experiment, electrons are fired randomly at two slits, and the probability distribution of detecting electrons at all parts on a large screen placed behind the slits, is questioned. An intuitive answer is that P(through either slit) = P(through first slit) + P(through second slit), where P(event) is the probability of that event. This is obvious if one assumes that an electron passes through either slit. When no measurement apparatus that determines through which slit the electrons travel is installed, the observed probability distribution on the screen reflects the interference pattern that is common with light waves. If one assumes the above law to be true, then this pattern cannot be explained. The particles cannot be said to go through either slit and the simple explanation does not work. The correct explanation is, however, by the association of probability amplitudes to each event. The complex amplitudes which represent the electron passing each slit (ψfirst and ψsecond) follow the law of precisely the form expected: ψtotal = ψfirst + ψsecond. This is the principle of quantum superposition. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex: {\textstyle P=\left|\psi _{\text{first}}+\psi _{\text{second}}\right|^{2}=\left|\psi _{\text{first}}\right|^{2}+\left|\psi _{\text{second}}\right|^{2}+2\left|\psi _{\text{first}}\right|\left|\psi _{\text{second}}\right|\cos(\varphi _{1}-\varphi _{2})}. Here, φ1 and φ2 are the arguments of ψfirst and ψsecond respectively. A purely real formulation has too few dimensions to describe the system's state when superposition is taken into account. That is, without the arguments of the amplitudes, we cannot describe the phase-dependent interference. The crucial term {\textstyle 2\left|\psi _{\text{first}}\right|\left|\psi _{\text{second}}\right|\cos(\varphi _{1}-\varphi _{2})} is called the ""interference term"", and this would be missing if we had added the probabilities.
The path integral formulation of quantum mechanics actually refers not to path integrals in this sense but to functional integrals, that is, integrals over a space of paths, of a function of a possible path. However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory.","The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. Using a wavefunction representation for probability amplitudes, and defining the probability amplitudes as
The squared modulus is applied in signal processing, to relate the Fourier transform and the power spectrum, and also in quantum mechanics, relating probability amplitudes and probability densities.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. The correct explanation is, however, by the association of probability amplitudes to each event. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex:
where the coefficients are complex probability amplitudes, such that the sum of their squares is unity (normalization). The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex: Here, φ1 and φ2 are the arguments of ψfirst and ψsecond respectively. However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an ex","The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. Using a wavefunction representation for probability amplitudes, and defining the probability amplitudes as
The squared modulus is applied in signal processing, to relate the Fourier transform and the power spectrum, and also in quantum mechanics, relating probability amplitudes and probability densities.
Probability amplitudes have special significance because they act in quantum mechanics as the equivalent of conventional probabilities, with many analogous laws, as described above. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. The correct explanation is, however, by the association of probability amplitudes to each event. The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex:
where the coefficients are complex probability amplitudes, such that the sum of their squares is unity (normalization). The probability, which is the modulus squared of the probability amplitude, then, follows the interference pattern under the requirement that amplitudes are complex: Here, φ1 and φ2 are the arguments of ψfirst and ψsecond respectively. However, path integrals in the sense of this article are important in quantum mechanics; for example, complex contour integration is often used in evaluating probability amplitudes in quantum scattering theory. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an ex[SEP]What is the significance of probability amplitudes in quantum mechanics?","['E', 'D', 'A']",1.0
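Editor's note: the passages in the row above refer to a displayed equation after "amplitudes are complex:" that is not reproduced in the extracted text. A worked restatement, using the row's own ψfirst/ψsecond notation and standing only on the interference term the row does quote, is:

P = \left|\psi_{\text{first}} + \psi_{\text{second}}\right|^{2} = \left|\psi_{\text{first}}\right|^{2} + \left|\psi_{\text{second}}\right|^{2} + 2\left|\psi_{\text{first}}\right|\left|\psi_{\text{second}}\right|\cos(\varphi_{1}-\varphi_{2}), \qquad \sum_{i}\left|c_{i}\right|^{2} = 1 \ \text{(normalization)}.

The last term is the interference term discussed in the row; adding probabilities instead of amplitudes (dropping the phases φ1, φ2) removes it.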
What is the relationship between the amplitude of a sound wave and its loudness?,"Loudness, a subjective measure, is often confused with physical measures of sound strength such as sound pressure, sound pressure level (in decibels), sound intensity or sound power. The relation of physical attributes of sound to perceived loudness consists of physical, physiological and psychological components. A more complex signal also creates more nerve firings and so sounds louder (for the same wave amplitude) than a simpler sound, such as a sine wave. ===Timbre=== thumb|Figure 4. Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties: * Frequency, or its inverse, wavelength * Amplitude, sound pressure or Intensity * Speed of sound * Direction Sound that is perceptible by humans has frequencies from about 20 Hz to 20,000 Hz. Loudness perception Loudness is perceived as how ""loud"" or ""soft"" a sound is and relates to the totalled number of auditory nerve stimulations over short cyclic time periods, most likely over the duration of theta wave cycles. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. Historically, loudness was measured using an ""ear-balance"" audiometer in which the amplitude of a sine wave was adjusted by the user to equal the perceived loudness of the sound being evaluated. In acoustics, loudness is the subjective perception of sound pressure. In physics, sound is a vibration that propagates as an acoustic wave, through a transmission medium such as a gas, liquid or solid. The behavior of sound propagation is generally affected by three things: * A complex relationship between the density and pressure of the medium. In this case, sound is a sensation. ==Physics== Sound can propagate through a medium such as air, water and solids as longitudinal waves and also as a transverse wave in solids. That is, the softest sound that is audible to these listeners is louder than the softest sound audible to normal listeners. ==Compensation== The loudness control associated with a loudness compensation feature on some consumer stereos alters the frequency response curve to correspond roughly with the equal loudness characteristic of the ear. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Loudness recruitment posits that loudness grows more rapidly for certain listeners than normal listeners with changes in level. A more precise model known as the Inflected Exponential function, indicates that loudness increases with a higher exponent at low and high levels and with a lower exponent at moderate levels. In different industries, loudness may have different meanings and different measurement standards. Sounds at low levels (often perceived by those without hearing loss as relatively quiet) are no longer audible to the hearing impaired, but sounds at high levels often are perceived as having the same loudness as they would for an unimpaired listener. Loudspeaker acoustics is a subfield of acoustical engineering concerned with the reproduction of sound and the parameters involved in doing so in actual equipment. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density. A complete model of the perception of loudness will include the integration of SPL by frequency. 
",The amplitude of a sound wave is related to its loudness.,The amplitude of a sound wave is directly proportional to its frequency.,The amplitude of a sound wave is not related to its loudness.,The amplitude of a sound wave is not related to its frequency.,The amplitude of a sound wave is inversely related to its loudness.,A,kaggle200,"Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.
If this wave is a sound wave, the ear hears the frequency associated with ""f"" and the amplitude of this sound varies with the beat frequency.
Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. In a sinusoidal function such as C sin(2πft), ""C"" represents the amplitude of the sound wave.
Sound is measured based on the amplitude and frequency of a sound wave. Amplitude measures how forceful the wave is. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Decibels are expressed in a logarithmic scale. On the other hand, pitch describes the frequency of a sound and is measured in hertz (Hz).","Sound waves are what physicists call longitudinal waves, which consist of propagating regions of high pressure (compression) and corresponding regions of low pressure (rarefaction).
Waveform Waveform is a description of the general shape of the sound wave. Waveforms are sometimes described by the sum of sinusoids, via Fourier analysis.
Amplitude Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. In a sinusoidal function such as C sin(2πft), C represents the amplitude of the sound wave.
A 5 to 30 second sound is recorded which could be a message or any sound at all.
A sound wave is created out of it using various software and printed on paper.
The print is inked on the client's body line by line.
After inking, the picture of the sound wave is uploaded to the internet.
People can use a sound wave app to scan and hear the audio message in it.
Sound is measured based on the amplitude and frequency of a sound wave. Amplitude measures how forceful the wave is. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Decibels are expressed in a logarithmic scale. On the other hand, pitch describes the frequency of a sound and is measured in hertz (Hz). The main instrument to measure sounds in the air is the Sound Level Meter. There are many different varieties of instruments that are used to measure noise - Noise Dosimeters are often used in occupational environments, noise monitors are used to measure environmental noise and noise pollution, and recently smartphone-based sound level meter applications (apps) are being used to crowdsource and map recreational and community noise. A-weighting is applied to a sound spectrum to represent the sound that humans are capable of hearing at each frequency. Sound pressure is thus expressed in terms of dBA. 0 dBA is the softest level that a person can hear. Normal speaking voices are around 65 dBA. A rock concert can be about 120 dBA.","Amplitude measures how forceful the wave is. Waveforms are sometimes described by the sum of sinusoids, via Fourier analysis.
Amplitude Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.
If this wave is a sound wave, the ear hears the frequency associated with ""f"" and the amplitude of this sound varies with the beat frequency.
Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. Sound pressure is thus expressed in terms of dBA. In a sinusoidal function such as C sin(2πft), C represents the amplitude of the sound wave.
A 5 to 30 second sound is recorded which could be a message or any sound at all.
A sound wave is created out of it using various software and printed on paper.
The print is inked on the client's body line by line.
After inking, the picture of the sound wave is uploaded to the internet.
People can use a sound wave app to scan and hear the audio message in it.
Sound is measured based on the amplitude and frequency of a sound wave. Decibels are expressed in a logarithmic scale. In a sinusoidal function such as C sin(2πft), ""C"" represents the amplitude of the sound wave.
Sound is measured based on the amplitude and frequency of a sound wave. Sound waves are what physicists call longitudinal waves, which consist of propagating regions of high pressure (compression) and corresponding regions of low pressure (rarefaction).
Waveform Waveform is a description of the general shape of the sound wave. There are many different varieties of instruments that are used to measure noise - Noise Dosimeters are often used in occupational environ","Amplitude measures how forceful the wave is. Waveforms are sometimes described by the sum of sinusoids, via Fourier analysis.
Amplitude Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. The energy in a sound wave is measured in decibels (dB), the measure of loudness, or intensity of a sound; this measurement describes the amplitude of a sound wave. Particle displacement or displacement amplitude is a measurement of distance of the movement of a sound particle from its equilibrium position in a medium as it transmits a sound wave.
If this wave is a sound wave, the ear hears the frequency associated with ""f"" and the amplitude of this sound varies with the beat frequency.
Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. Sound pressure is thus expressed in terms of dBA. In a sinusoidal function such as C sin(2πft), C represents the amplitude of the sound wave.
A 5 to 30 second sound is recorded which could be a message or any sound at all.
A sound wave is created out of it using various software and printed on paper.
The print is inked on the client's body line by line.
After inking, the picture of the sound wave is uploaded to the internet.
People can use a sound wave app to scan and hear the audio message in it.
Sound is measured based on the amplitude and frequency of a sound wave. Decibels are expressed in a logarithmic scale. In a sinusoidal function such as C sin(2πft), ""C"" represents the amplitude of the sound wave.
Sound is measured based on the amplitude and frequency of a sound wave. Sound waves are what physicists call longitudinal waves, which consist of propagating regions of high pressure (compression) and corresponding regions of low pressure (rarefaction).
Waveform Waveform is a description of the general shape of the sound wave. There are many different varieties of instruments that are used to measure noise - Noise Dosimeters are often used in occupational environ[SEP]What is the relationship between the amplitude of a sound wave and its loudness?","['A', 'C', 'E']",1.0
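Editor's note: the passages in the row above tie loudness to amplitude through decibels on a logarithmic scale but never write the relation out. As a hedged illustration using the standard acoustics convention (not quoted from the passages), with p the sound pressure amplitude and p_ref the reference pressure in air:

L_{p} = 20\,\log_{10}\!\left(\frac{p}{p_{\text{ref}}}\right)\ \text{dB}, \qquad p_{\text{ref}} = 20\ \mu\text{Pa},

so doubling the pressure amplitude raises the level by 20 log10(2) ≈ 6 dB; perceived loudness further depends on frequency, which is what the A-weighting mentioned above accounts for.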
What are coherent turbulent structures?,"By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticity. Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures. Hence, similarly organized events in an ensemble average of organized events can be defined as a coherent structure, and whatever events not identified as similar or phase and space aligned in the ensemble average is an incoherent turbulent structure. Other attempts at defining a coherent structure can be done through examining the correlation between their momenta or pressure and their turbulent flows. Furthermore, a coherent structure is defined as a turbulent flow whose vorticity expression, which is usually stochastic, contains orderly components that can be described as being instantaneously coherent over the spatial extent of the flow structure. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Although such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures in a highly conceptual degree.Hussain, A. K. M. F. ""Coherent structures- reality and myth"" Phys. Fluids 26, 2816, doi: 10.1063/1.864048. (1983) ==History and Discovery== The presence of organized motions and structures in turbulent shear flows was apparent for a long time, and has been additionally implied by mixing length hypothesis even before the concept was explicitly stated in literature. With a much better understanding of coherent structures, it is now possible to discover and recognize many coherent structures in previous flow-visualization pictures collected of various turbulent flows taken decades ago. Out of the three categories, coherent structures typically arise from instabilities in laminar or turbulent states. In other words, underlying the three-dimensional chaotic vorticity expressions typical of turbulent flows, there is an organized component of that vorticity which is phase-correlated over the entire space of the structure. The contours of these properties not only locate where exactly coherent structure quantities have their peaks and saddles, but also identify where the incoherent turbulent structures are when overlaid on their directional gradients. For example, in order for a structure to be evolving, and hence dominant, its coherent vorticity, coherent Reynolds stress, and production terms should be larger than the time averaged values of the flow structures. ==Formation== Coherent structures form due to some sort of instability, e.g. the Kelvin–Helmholtz instability. The instantaneously space and phase correlated vorticity found within the coherent structure expressions can be defined as coherent vorticity, hence making coherent vorticity the main characteristic identifier for coherent structures. Most coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layer. 
There are also coherent motions at much smaller scales such as hairpin vortices and typical eddies, which are typically known as coherent substructures, as in coherent structures which can be broken up into smaller more elementary substructures. ==Characteristics== Although a coherent structure is by definition characterized by high levels of coherent vorticity, Reynolds stress, production, and heat and mass transportation, it does not necessarily require a high level of kinetic energy. Another characteristic inherent in turbulent flows is their intermittency, but intermittency is a very poor identifier of the boundaries of a coherent structure, hence it is generally accepted that the best way to characterize the boundary of a structure is by identifying and defining the boundary of the coherent vorticity. Some coherent structures, such as vortex rings, etc. can be large-scale motions comparable to the extent of the shear flow. It is also possible that structures do not decay and instead distort by splitting into substructures or interacting with other coherent structures. ==Categories of Coherent Structures== ===Lagrangian Coherent Structures=== 400px|right|thumb|Attracting (red) and repelling (blue) LCSs extracted from a two-dimensional turbulence experiment (Image: Manikandan Mathur) Lagrangian coherent structures (LCSs) are influential material surfaces that create clearly recognizable patterns in passive tracer distributions advected by an unsteady flow. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vortices. In addition, spatial contours can be drawn describe the shape, size, and strength of the coherent structures, depicting not only the mechanics but also the dynamical evolution of coherent structures. ","Coherent turbulent structures are the most elementary components of complex multi-scale and chaotic motions in turbulent flows, which do not have temporal coherence and persist in their form for long enough periods that the methods of time-averaged statistics can be applied.","Coherent turbulent structures are the most elementary components of complex multi-scale and chaotic motions in turbulent flows, which have temporal coherence and persist in their form for very short periods that the methods of time-averaged statistics cannot be applied.","Coherent turbulent structures are more elementary components of complex multi-scale and chaotic motions in turbulent flows, which have temporal coherence and persist in their form for long enough periods that the methods of time-averaged statistics can be applied.","Coherent turbulent structures are the most complex and chaotic motions in turbulent flows, which have temporal coherence and persist in their form for long enough periods that the methods of time-averaged statistics can be applied.","Coherent turbulent structures are the most complex and chaotic motions in turbulent flows, which do not have temporal coherence and persist in their form for very short periods that the methods of time-averaged statistics cannot be applied.",C,kaggle200,"By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticity. 
Hence, similarly organized events in an ensemble average of organized events can be defined as a coherent structure, and whatever events not identified as similar or phase and space aligned in the ensemble average is an incoherent turbulent structure.
Hairpin vortices are found on top of turbulent bulges of the turbulent wall, wrapping around the turbulent wall in hairpin shaped loops, where the name originates. The hairpin-shaped vortices are believed to be one of the most important and elementary sustained flow patterns in turbulent boundary layers. Hairpins are perhaps the simplest structures, and models that represent large scale turbulent boundary layers are often constructed by breaking down individual hairpin vortices, which could explain most of the features of wall turbulence. Although hairpin vortices form the basis of simple conceptual models of flow near a wall, actual turbulent flows may contain a hierarchy of competing vortices, each with their own degree of asymmetry and disturbances.
Out of the three categories, coherent structures typically arise from instabilities in laminar or turbulent states. After an initial triggering, their growth is determined by evolutionary changes due to non-linear interactions with other coherent structures, or their decay onto incoherent turbulent structures. Observed rapid changes lead to the belief that there must be a regenerative cycle that takes place during decay. For example, after a structure decays, the result may be that the flow is now turbulent and becomes susceptible to a new instability determined by the new flow state, leading to a new coherent structure being formed. It is also possible that structures do not decay and instead distort by splitting into substructures or interacting with other coherent structures.
Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vortices. Hairpins and coherent structures have been studied and noticed in data since the 1930s, and have been since cited in thousands of scientific papers and reviews.","A turbulent flow is a flow regime in fluid dynamics where fluid velocity varies significantly and irregularly in both position and time. Furthermore, a coherent structure is defined as a turbulent flow whose vorticity expression, which is usually stochastic, contains orderly components that can be described as being instantaneously coherent over the spatial extent of the flow structure. In other words, underlying the three-dimensional chaotic vorticity expressions typical of turbulent flows, there is an organized component of that vorticity which is phase-correlated over the entire space of the structure. The instantaneously space and phase correlated vorticity found within the coherent structure expressions can be defined as coherent vorticity, hence making coherent vorticity the main characteristic identifier for coherent structures. Another characteristic inherent in turbulent flows is their intermittency, but intermittency is a very poor identifier of the boundaries of a coherent structure, hence it is generally accepted that the best way to characterize the boundary of a structure is by identifying and defining the boundary of the coherent vorticity.By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticity. Hence, similarly organized events in an ensemble average of organized events can be defined as a coherent structure, and whatever events not identified as similar or phase and space aligned in the ensemble average is an incoherent turbulent structure. Other attempts at defining a coherent structure can be done through examining the correlation between their momenta or pressure and their turbulent flows. However, it often leads to false indications of turbulence, since pressure and velocity fluctuations over a fluid could be well correlated in the absence of any turbulence or vorticity. Some coherent structures, such as vortex rings, etc. can be large-scale motions comparable to the extent of the shear flow. There are also coherent motions at much smaller scales such as hairpin vortices and typical eddies, which are typically known as coherent substructures, as in coherent structures which can be broken up into smaller more elementary substructures.
Flow visualization experiments, using smoke and dye as tracers, have been historically used to simulate coherent structures and verify theories, but computer models are now the dominant tools widely used in the field to verify and understand the formation, evolution, and other properties of such structures. The kinematic properties of these motions include size, scale, shape, vorticity, energy, and the dynamic properties govern the way coherent structures grow, evolve, and decay. Most coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layer. Although such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures in a highly conceptual degree.
Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vortices. Hairpins and coherent structures have been studied and noticed in data since the 1930s, and have been since cited in thousands of scientific papers and reviews.","Furthermore, a coherent structure is defined as a turbulent flow whose vorticity expression, which is usually stochastic, contains orderly components that can be described as being instantaneously coherent over the spatial extent of the flow structureAlthough such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures in a highly conceptual degree.
Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures- By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticityMost coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layerSome coherent structures, such as vortex rings, etc Other attempts at defining a coherent structure can be done through examining the correlation between their momenta or pressure and their turbulent flowsAnother characteristic inherent in turbulent flows is their intermittency, but intermittency is a very poor identifier of the boundaries of a coherent structure, hence it is generally accepted that the best way to characterize the boundary of a structure is by identifying and defining the boundary of the coherent vorticity.By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticityCoherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vorticesHence, similarly organized events in an ensemble average of organized events can be defined as a cohere","Furthermore, a coherent structure is defined as a turbulent flow whose vorticity expression, which is usually stochastic, contains orderly components that can be described as being instantaneously coherent over the spatial extent of the flow structureAlthough such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures in a highly conceptual degree.
Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures- By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticityMost coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layerSome coherent structures, such as vortex rings, etc Other attempts at defining a coherent structure can be done through examining the correlation between their momenta or pressure and their turbulent flowsAnother characteristic inherent in turbulent flows is their intermittency, but intermittency is a very poor identifier of the boundaries of a coherent structure, hence it is generally accepted that the best way to characterize the boundary of a structure is by identifying and defining the boundary of the coherent vorticity.By defining and identifying coherent structure in this manner, turbulent flows can be decomposed into coherent structures and incoherent structures depending on their coherence, particularly their correlations with their vorticityCoherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vorticesHence, similarly organized events in an ensemble average of organized events can be defined as a cohere[SEP]What are coherent turbulent structures?","['C', 'D', 'E']",1.0
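Editor's note: the passages in the row above describe splitting a turbulent flow into coherent (ensemble- or phase-correlated) and incoherent parts but give no formula. One standard way to write that split, the triple decomposition in the sense of Hussain and Reynolds (an assumption here, since the extracted text does not state it), is:

u(\mathbf{x},t) = \overline{u}(\mathbf{x}) + \tilde{u}(\mathbf{x},t) + u'(\mathbf{x},t),

where \overline{u} is the time (or ensemble) mean, \tilde{u} is the phase-averaged coherent contribution, and u' is the residual incoherent turbulence; applying the same decomposition to the vorticity field yields the "coherent vorticity" that the passages use to identify the structures.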
What is the main factor that determines the occurrence of each type of supernova?,"A supernova is first categorized as either a Type I or Type II, then subcategorized based on more specific traits. As they are formed from rare, very massive stars, the rate of Type Ib and Ic supernova occurrence is much lower than the corresponding rate for Type II supernovae. Type II supernova progenitors include stars with at least 10 solar masses that are in the final stages of their evolution. Type Ia supernova progenitors are white dwarf stars that are close to the Chandrasekhar limit of about 1.44 solar masses and are accreting matter from a binary companion star. A Type II supernova (plural: supernovae or supernovas) results from the rapid collapse and violent explosion of a massive star. Type Ib and Type Ic supernovae are categories of supernovae that are caused by the stellar core collapse of massive stars. Type Ic supernovae are distinguished from Type Ib in that the former also lack lines of helium at 587.6 nm. ==Formation== right|thumb|240px|The onion-like layers of an evolved, massive star (not to scale). Type II supernovae are distinguished from other types of supernovae by the presence of hydrogen in their spectra. The presence of these lines is used to distinguish this category of supernova from a Type I supernova. This is a list of supernova candidates, or stars that astronomers have suggested are supernova progenitors. This is a list of supernovae that are of historical significance. Type Ia supernova Supernova Primo z=1.55 ESA, The Hubble eXtreme Deep Field, 25 September 2012 ==Most distant supernovae by type== Most distant by type Type Name Distance Notes Supernova any type Type I supernova any type Type Ia supernova SN UDS10Wil z=1.914 Type Ib supernova Type Ic supernova Type II supernova any type Type II-P supernova Type II-L supernova Type IIb supernova Type IIn supernova ==See also== *List of largest cosmic structures *List of the most distant astronomical objects *List of supernovae ==References== ==External links== * Up to date list of the most distant known supernovae at the Open Supernova Catalog *Most distant Most distant supernovae Supernovae, most distant These include supernovae that were observed prior to the availability of photography, and individual events that have been the subject of a scientific paper that contributed to supernova theory. If they accumulate more mass from another star, or some other source, they may become Type Ia supernovae. There exist several categories of Type II supernova explosions, which are categorized based on the resulting light curve—a graph of luminosity versus time—following the explosion. By ignoring the first second of the explosion, and assuming that an explosion is started, astrophysicists have been able to make detailed predictions about the elements produced by the supernova and of the expected light curve from the supernova. ==Light curves for Type II-L and Type II-P supernovae== right|thumb|280px|This graph of the luminosity as a function of time shows the characteristic shapes of the light curves for a Type II-L and II-P supernova. When the luminosity of a Type II supernova is plotted over a period of time, it shows a characteristic rise to a peak brightness followed by a decline. Because of the underlying mechanism, the resulting supernova is also described as a core-collapse supernova. The two types are usually referred to as stripped core-collapse supernovae. 
==Spectra== When a supernova is observed, it can be categorized in the Minkowski–Zwicky supernova classification scheme based upon the absorption lines that appear in its spectrum. However, due to the similarity of the spectra of Type Ib and Ic supernovae, the latter can form a source of contamination of supernova surveys and must be carefully removed from the observed samples before making distance estimates. ==See also== * Type Ia supernova * Type II supernova ==References== ==External links== *List of all known Type Ib and Ic supernovae at The Open Supernova Catalog. ",The star's distance from Earth,The star's age,The star's temperature,The star's luminosity,The progenitor star's metallicity,E,kaggle200,"The star's surface gravity, log""g"" = 4.15, and its low levels of lithium helped derive the star's age, and revealed that it most likely evolved away from the zero age main sequence.
Well-timed leaks about a star's purported romantic adventures helped the studios to create and to sustain the public's interest in the studios' star actors. As well, the movie studios' publicity agents acted as unnamed ""well-informed inside sources"" and provided misinformation and rumors to counteract whispers about celebrity secrets, such as homosexuality or an out-of-wedlock child, which could have severely damaged not only the reputation of the movie star in question but also the movie star's box office viability.
The apparent brightness of a star is expressed in terms of its apparent magnitude. It is a function of the star's luminosity, its distance from Earth, the extinction effect of interstellar dust and gas, and the altering of the star's light as it passes through Earth's atmosphere. Intrinsic or absolute magnitude is directly related to a star's luminosity, and is the apparent magnitude a star would be if the distance between the Earth and the star were 10 parsecs (32.6 light-years).
The supernova classification type is closely tied to the type of star at the time of the collapse. The occurrence of each type of supernova depends dramatically on the metallicity, and hence the age of the host galaxy.","The star of S (denoted stS ) is the union of the stars of each simplex in S. For a single simplex s, the star of s is the set of simplices having s as a face. The star of S is generally not a simplicial complex itself, so some authors define the closed star of S (denoted StS ) as ClstS the closure of the star of S.
Progenitor The supernova classification type is closely tied to the type of star at the time of the collapse. The occurrence of each type of supernova depends on the progenitor star's metallicity, since this affects the strength of the stellar wind and thereby the rate at which the star loses mass.Type Ia supernovae are produced from white dwarf stars in binary star systems and occur in all galaxy types. Core collapse supernovae are only found in galaxies undergoing current or very recent star formation, since they result from short-lived massive stars. They are most commonly found in type Sc spirals, but also in the arms of other spiral galaxies and in irregular galaxies, especially starburst galaxies.Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via strong stellar winds or mass transfer to a companion. They normally occur in regions of new star formation, and are extremely rare in elliptical galaxies. The progenitors of type IIn supernovae also have high rates of mass loss in the period just prior to their explosions. Type Ic supernovae have been observed to occur in regions that are more metal-rich and have higher star-formation rates than average for their host galaxies. The table shows the progenitor for the main types of core collapse supernova, and the approximate proportions that have been observed in the local neighbourhood.
The type locality is Barranca de Cobre, Chihuahua, Mexico, and the type material is conserved at the Royal Ontario Museum, Toronto, Canada, M25369, and at the National Museum of Natural History, Washington, DC, US.","The occurrence of each type of supernova depends dramatically on the metallicity, and hence the age of the host galaxy.The occurrence of each type of supernova depends on the progenitor star's metallicity, since this affects the strength of the stellar wind and thereby the rate at which the star loses mass.Type Ia supernovae are produced from white dwarf stars in binary star systems and occur in all galaxy typesType Ic supernovae have been observed to occur in regions that are more metal-rich and have higher star-formation rates than average for their host galaxiesThey are most commonly found in type Sc spirals, but also in the arms of other spiral galaxies and in irregular galaxies, especially starburst galaxies.Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via strong stellar winds or mass transfer to a companionIntrinsic or absolute magnitude is directly related to a star's luminosity, and is the apparent magnitude a star would be if the distance between the Earth and the star were 10 parsecs (32.6 light-years).
The supernova classification type is closely tied to the type of star at the time of the collapse. The table shows the progenitor for the main types of core collapse supernova, and the approximate proportions that have been observed in the local neighbourhood.
The type locality is Barranca de Cobre, Chihuahua, Mexico, and the type material is conserved at the Royal Ontario Museum, Toronto, Canada, M25369, and at the National Museum of Natural History, Washington, DC, US. The progenitors of type IIn supernovae also have high rates of mass loss in the period just prior to their explosions. It is a function of the star's luminosity, its distance from Earth, the extinction effect of interstellar dust and gas, and the altering of the star's light as it passes through Earth's atmosphere. The star of S is generally not a simplicial complex itself, so some authors define the closed star of S (denoted StS ) as ClstS the closure of the star of S.
Progenit","The occurrence of each type of supernova depends dramatically on the metallicity, and hence the age of the host galaxy.The occurrence of each type of supernova depends on the progenitor star's metallicity, since this affects the strength of the stellar wind and thereby the rate at which the star loses mass.Type Ia supernovae are produced from white dwarf stars in binary star systems and occur in all galaxy typesType Ic supernovae have been observed to occur in regions that are more metal-rich and have higher star-formation rates than average for their host galaxiesThey are most commonly found in type Sc spirals, but also in the arms of other spiral galaxies and in irregular galaxies, especially starburst galaxies.Type Ib and Ic supernovae are hypothesized to have been produced by core collapse of massive stars that have lost their outer layer of hydrogen and helium, either via strong stellar winds or mass transfer to a companionIntrinsic or absolute magnitude is directly related to a star's luminosity, and is the apparent magnitude a star would be if the distance between the Earth and the star were 10 parsecs (32.6 light-years).
The supernova classification type is closely tied to the type of star at the time of the collapse. The table shows the progenitor for the main types of core collapse supernova, and the approximate proportions that have been observed in the local neighbourhood.
The type locality is Barranca de Cobre, Chihuahua, Mexico, and the type material is conserved at the Royal Ontario Museum, Toronto, Canada, M25369, and at the National Museum of Natural History, Washington, DC, US. The progenitors of type IIn supernovae also have high rates of mass loss in the period just prior to their explosions. It is a function of the star's luminosity, its distance from Earth, the extinction effect of interstellar dust and gas, and the altering of the star's light as it passes through Earth's atmosphere. The star of S is generally not a simplicial complex itself, so some authors define the closed star of S (denoted StS ) as ClstS the closure of the star of S.
Progenit[SEP]What is the main factor that determines the occurrence of each type of supernova?","['E', 'D', 'B']",1.0
What is the Erlangen program?,"In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. There arises the question of reading the Erlangen program from the abstract group, to the geometry. Has a section on the Erlangen program. Books such as those by H.S.M. Coxeter routinely used the Erlangen program approach to help 'place' geometries. :The original German text of the Erlangen program can be viewed at the University of Michigan online collection at , and also at in HTML format. * * Lizhen Ji and Athanase Papadopoulos (editors) (2015) Sophus Lie and Felix Klein: The Erlangen program and its impact in mathematics and physics, IRMA Lectures in Mathematics and Theoretical Physics 23, European Mathematical Society Publishing House, Zürich. (See Klein geometry for more details.) ==Influence on later work== The long-term effects of the Erlangen program can be seen all over pure mathematics (see tacit use at congruence (geometry), for example); and the idea of transformations and of synthesis using groups of symmetry has become standard in physics. :A central information page on the Erlangen program maintained by John Baez is at . *Sharpe, Richard W. (1997) Differential geometry: Cartan's generalization of Klein's Erlangen program Vol. 166. * Thomas Hawkins (1984) ""The Erlanger Program of Felix Klein: Reflections on Its Place In the History of Mathematics"", Historia Mathematica 11:442-70\. In mathematical logic, the Erlangen program also served as an inspiration for Alfred Tarski in his analysis of logical notions.Luca Belotti, Tarski on Logical Notions, Synthese, 404-413, 2003. ==References== *Klein, Felix (1872) ""A comparative review of recent researches in geometry"". Erlang is an open source programming language. Erlang ( ) is a general-purpose, concurrent, functional high-level programming language, and a garbage-collected runtime system. In his book Structuralism (1970) Jean Piaget says, ""In the eyes of contemporary structuralist mathematicians, like Bourbaki, the Erlangen program amounts to only a partial victory for structuralism, since they want to subordinate all mathematics, not just geometry, to the idea of structure."" The Erlangen program can therefore still be considered fertile, in relation with dualities in physics. Erlang was designed with the aim of improving the development of telephony applications. 237); the point is elaborated in Jean- Pierre Marquis (2009), From a Geometrical Point of View: A Study of the History of Category Theory, Springer, Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines.Jean Pradines, In Ehresmann's footsteps: from group geometries to groupoid geometries (English summary) Geometry and topology of manifolds, 87–157, Banach Center Publ., 76, Polish Acad. Sci., Warsaw, 2007. Since the open source release, Erlang has been used by several firms worldwide, including Nortel and T-Mobile. In the seminal paper which introduced categories, Saunders Mac Lane and Samuel Eilenberg stated: ""This may be regarded as a continuation of the Klein Erlanger Program, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings.""S. Eilenberg and S. Mac Lane, A general theory of natural equivalences, Trans. Amer. Math. Soc., 58:231–294, 1945. Erlang/OTP is supported and maintained by the Open Telecom Platform (OTP) product unit at Ericsson. 
==History== The name Erlang, attributed to Bjarne Däcker, has been presumed by those working on the telephony switches (for whom the language was designed) to be a reference to Danish mathematician and engineer Agner Krarup Erlang and a syllabic abbreviation of ""Ericsson Language"". ","The Erlangen program is a method of characterizing geometries based on statistics and probability, published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen.","The Erlangen program is a method of characterizing geometries based on group theory and projective geometry, published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen.","The Erlangen program is a method of characterizing geometries based on algebra and trigonometry, published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen.","The Erlangen program is a method of characterizing geometries based on geometry and topology, published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen.","The Erlangen program is a method of characterizing geometries based on calculus and differential equations, published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen.",B,kaggle200,"Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groups. There arises the question of reading the Erlangen program from the ""abstract"" group, to the geometry.
Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines.
The Erlangen program can therefore still be considered fertile, in relation with dualities in physics.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It was published by Felix Klein in 1872 as ""Vergleichende Betrachtungen über neuere geometrische Forschungen."" It is named after the University Erlangen-Nürnberg, where Klein worked.","In the seminal paper which introduced categories, Saunders Mac Lane and Samuel Eilenberg stated: ""This may be regarded as a continuation of the Klein Erlanger Program, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings.""Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines.In mathematical logic, the Erlangen program also served as an inspiration for Alfred Tarski in his analysis of logical notions.
Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groups. There arises the question of reading the Erlangen program from the abstract group, to the geometry.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It was published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen. It is named after the University Erlangen-Nürnberg, where Klein worked.","There arises the question of reading the Erlangen program from the ""abstract"" group, to the geometry.
Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines.
The Erlangen program can therefore still be considered fertile, in relation with dualities in physics.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. There arises the question of reading the Erlangen program from the abstract group, to the geometry.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It is named after the University Erlangen-Nürnberg, where Klein worked. In the seminal paper which introduced categories, Saunders Mac Lane and Samuel Eilenberg stated: ""This may be regarded as a continuation of the Klein Erlanger Program, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings."" Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines. In mathematical logic, the Erlangen program also served as an inspiration for Alfred Tarski in his analysis of logical notions.
Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groupsIt was published by Felix Klein in 1872 as ""Vergleichende Betrachtungen über neuere geometrische Forschungen."" It is named after the University Erlangen-Nürnberg, where Klein worked.It was published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen- Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groups","There arises the question of reading the Erlangen program from the ""abstract"" group, to the geometry.
Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines.
The Erlangen program can therefore still be considered fertile, in relation with dualities in physics.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. There arises the question of reading the Erlangen program from the abstract group, to the geometry.
In mathematics, the Erlangen program is a method of characterizing geometries based on group theory and projective geometry. It is named after the University Erlangen-Nürnberg, where Klein worked. In the seminal paper which introduced categories, Saunders Mac Lane and Samuel Eilenberg stated: ""This may be regarded as a continuation of the Klein Erlanger Program, in the sense that a geometrical space with its group of transformations is generalized to a category with its algebra of mappings."" Relations of the Erlangen program with work of Charles Ehresmann on groupoids in geometry is considered in the article below by Pradines. In mathematical logic, the Erlangen program also served as an inspiration for Alfred Tarski in his analysis of logical notions.
Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groupsIt was published by Felix Klein in 1872 as ""Vergleichende Betrachtungen über neuere geometrische Forschungen."" It is named after the University Erlangen-Nürnberg, where Klein worked.It was published by Felix Klein in 1872 as Vergleichende Betrachtungen über neuere geometrische Forschungen- Quite often, it appears there are two or more distinct geometries with isomorphic automorphism groups[SEP]What is the Erlangen program?","['B', 'C', 'D']",1.0
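Editor's note: the passages in the row above point to Klein geometry for details. A compact way to state the Erlangen viewpoint, given here as the standard formulation rather than a quotation from the row, is that a geometry is a homogeneous space for its symmetry group:

X \cong G/H, \qquad H = \operatorname{Stab}_{G}(x_{0}),

for example the Euclidean plane as E(2)/O(2) and the real projective plane as PGL(3,\mathbb{R})/P for a point stabilizer P; the "geometric" notions of each geometry are then exactly the invariants of the G-action, which is how the row's remark about geometries sharing isomorphic automorphism groups should be read.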
What is emissivity?,"Emissivity and emittivity are both dimensionless quantities given in the range of 0 to 1, representing the comparative/relative emittance with respect to a blackbody operating in similar conditions, but emissivity refers to a material property (of a homogeneous material), while emittivity refers to specific samples or objects. Low emissivity (low e or low thermal emissivity) refers to a surface condition that emits low levels of radiant thermal (heat) energy. Emissivity of a body at a given temperature is the ratio of the total emissive power of a body to the total emissive power of a perfectly black body at that temperature. The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. The term emissivity is generally used to describe a simple, homogeneous surface such as silver. Emissivity measurements for many surfaces are compiled in many handbooks and texts. In common use, especially building applications, the temperature range of approximately -40 to +80 degrees Celsius is the focus, but in aerospace and industrial process engineering, much broader ranges are of practical concern. ==Definition== Emissivity is the value given to materials based on the ratio of heat emitted compared to a perfect black body, on a scale from zero to one. Thermal emittance or thermal emissivity (\varepsilon) is the ratio of the radiant emittance of heat of a specific object or surface to that of a standard black body. The emissivity of a surface depends on its chemical composition and geometrical structure. Emissivity of a planet is determined by the nature of its surface and atmosphere. However, the form of emissivity that most commonly used is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. # Most emissitivies in the chart above were recorded at room temperature, . ==Closely related properties== ===Absorptance=== There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the ""absorptivity"" of a surface). The thermal emissivity of various surfaces is listed in the following table. Hemispherical emissivity can also be expressed as a weighted average of the directional spectral emissivities as described in textbooks on ""radiative heat transfer"". ==Emissivities of common surfaces== Emissivities ε can be measured using simple devices such as Leslie's cube in conjunction with a thermal radiation detector such as a thermopile or a bolometer. On this site, the focus is on available data, references and links to resources related to spectral emissivity as it is measured & used in thermal radiation thermometry and thermography (thermal imaging). See However, wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures may have an emissivity greater than 1. == Practical applications == Emissivities are important in a variety of contexts: ; Insulated windows: Warm surfaces are usually cooled directly by air, but they also cool themselves by emitting thermal radiation. The calibration of these instruments involves the emissivity of the surface that's being measured. ==Mathematical definitions== In its most general form, emissivity can specified for a particular wavelength, direction, and polarization. Some specific forms of emissivity are detailed below. 
===Hemispherical emissivity=== Hemispherical emissivity of a surface, denoted ε, is defined as : \varepsilon = \frac{M_\mathrm{e}}{M_\mathrm{e}^\circ}, where * Me is the radiant exitance of that surface; * Me° is the radiant exitance of a black body at the same temperature as that surface. ===Spectral hemispherical emissivity=== Spectral hemispherical emissivity in frequency and spectral hemispherical emissivity in wavelength of a surface, denoted εν and ελ, respectively, are defined as : \begin{align} \varepsilon_ u &= \frac{M_{\mathrm{e}, u}}{M_{\mathrm{e}, u}^\circ}, \\\ \varepsilon_\lambda &= \frac{M_{\mathrm{e},\lambda}}{M_{\mathrm{e},\lambda}^\circ}, \end{align} where * Me,ν is the spectral radiant exitance in frequency of that surface; * Me,ν° is the spectral radiant exitance in frequency of a black body at the same temperature as that surface; * Me,λ is the spectral radiant exitance in wavelength of that surface; * Me,λ° is the spectral radiant exitance in wavelength of a black body at the same temperature as that surface. ===Directional emissivity=== Directional emissivity of a surface, denoted εΩ, is defined as : \varepsilon_\Omega = \frac{L_{\mathrm{e},\Omega}}{L_{\mathrm{e},\Omega}^\circ}, where * Le,Ω is the radiance of that surface; * Le,Ω° is the radiance of a black body at the same temperature as that surface. ===Spectral directional emissivity=== Spectral directional emissivity in frequency and spectral directional emissivity in wavelength of a surface, denoted εν,Ω and ελ,Ω, respectively, are defined as : \begin{align} \varepsilon_{ u,\Omega} &= \frac{L_{\mathrm{e},\Omega, u}}{L_{\mathrm{e},\Omega, u}^\circ}, \\\ \varepsilon_{\lambda,\Omega} &= \frac{L_{\mathrm{e},\Omega,\lambda}}{L_{\mathrm{e},\Omega,\lambda}^\circ}, \end{align} where * Le,Ω,ν is the spectral radiance in frequency of that surface; * Le,Ω,ν° is the spectral radiance in frequency of a black body at the same temperature as that surface; * Le,Ω,λ is the spectral radiance in wavelength of that surface; * Le,Ω,λ° is the spectral radiance in wavelength of a black body at the same temperature as that surface. Similar terms, emittance and thermal emittance, are used to describe thermal radiation measurements on complex surfaces such as insulation products. === Measurement of Emittance === Emittance of a surface can be measured directly or indirectly from the emitted energy from that surface. Similarly, pure water absorbs very little visible light, but water is nonetheless a strong infrared absorber and has a correspondingly high emissivity. ===Emittance=== Emittance (or emissive power) is the total amount of thermal energy emitted per unit area per unit time for all possible wavelengths. ",Emissivity is a measure of how well a surface resists deformation under stress.,Emissivity is a measure of how well a surface conducts heat.,Emissivity is a measure of how well a surface absorbs and emits thermal radiation.,Emissivity is a measure of how well a surface reflects visible light.,Emissivity is a measure of how well a surface absorbs and emits sound waves.,C,kaggle200,"The emissivity of a material (usually written ε or e) is the relative ability of its surface to emit energy by radiation. A black body has an emissivity of 1 and a perfect reflector has an emissivity of 0.
Real objects never behave as full-ideal black bodies, and instead the emitted radiation at a given frequency is a fraction of what the ideal emission would be. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. This emissivity depends on factors such as temperature, emission angle, and wavelength. However, it is typical in engineering to assume that a surface's spectral emissivity and absorptivity do not depend on wavelength so that the emissivity is a constant. This is known as the ""gray body"" assumption.
There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the ""absorptivity"" of a surface). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. Mirror-like, metallic surfaces that reflect light will thus have low emissivities, since the reflected light isn't absorbed. A polished silver surface has an emissivity of about 0.02 near room temperature. Black soot absorbs thermal radiation very well; it has an emissivity as large as 0.97, and hence soot is a fair approximation to an ideal black body.
A sensor with an adjustable emissivity setting can also be used to calibrate the sensor for a given surface or to measure the emissivity of a surface. When the temperature of a surface is accurately known (e.g. by measuring with a contact thermometer), then the sensor's emissivity setting can be adjusted until the temperature measurement by the IR method matches the measured temperature by the contact method; the emissivity setting will indicate the emissivity of the surface, which can be taken into account for later measurements of similar surfaces (only).","Absorptance There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the ""absorptivity"" of a surface). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. Mirror-like, metallic surfaces that reflect light will thus have low emissivities, since the reflected light isn't absorbed. A polished silver surface has an emissivity of about 0.02 near room temperature. Black soot absorbs thermal radiation very well; it has an emissivity as large as 0.97, and hence soot is a fair approximation to an ideal black body.With the exception of bare, polished metals, the appearance of a surface to the eye is not a good guide to emissivities near room temperature. For example, white paint absorbs very little visible light. However, at an infrared wavelength of 10×10−6 metre, paint absorbs light very well, and has a high emissivity. Similarly, pure water absorbs very little visible light, but water is nonetheless a strong infrared absorber and has a correspondingly high emissivity.
In its most general form, emissivity can be specified for a particular wavelength, direction, and polarization. However, the form of emissivity that is most commonly used is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. Some specific forms of emissivity are detailed below.
Hemispherical emissivity Hemispherical emissivity of a surface, denoted ε, is defined as ε=MeMe∘, where Me is the radiant exitance of that surface; Me° is the radiant exitance of a black body at the same temperature as that surface.
Directional emissivity Directional emissivity of a surface, denoted εΩ, is defined as εΩ=Le,ΩLe,Ω∘, where Le,Ω is the radiance of that surface; Le,Ω° is the radiance of a black body at the same temperature as that surface.","- The emissivity of a material (usually written ε or e) is the relative ability of its surface to emit energy by radiation. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. This emissivity depends on factors such as temperature, emission angle, and wavelength. Absorptance: There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the ""absorptivity"" of a surface). However, the form of emissivity that is most commonly used is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. Some specific forms of emissivity are detailed below.
Hemispherical emissivity Hemispherical emissivity of a surface, denoted ε, is defined as ε=MeMe∘, where Me is the radiant exitance of that surface; Me° is the radiant exitance of a black body at the same temperature as that surface.
Directional emissivity Directional emissivity of a surface, denoted εΩ, is defined as εΩ=Le,ΩLe,Ω∘, where Le,Ω is the radiance of that surface; Le,Ω° is the radiance of a black body at the same temperature as that surface. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. When the temperature of a surface is accurately known (e.g. by measuring with a contact thermometer), then the sensor's emissivity setting can be adjusted until the temperature measurement by the IR method matches the measured temperature by the contact method; the emissivity setting will indicate the emissivity of the surface, which can be taken into account for later measurements of similar surfaces (only). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. However, it is typical in engineering to assume that a surface's spectral emissivity and absorptivity do not depend on wavelength so that the emissivity is a constant. Black soot absorbs","- The emissivity of a material (usually written ε or e) is the relative ability of its surface to emit energy by radiation. The emissivity of a material specifies how well a real body radiates energy as compared with a black body. This emissivity depends on factors such as temperature, emission angle, and wavelength. Absorptance: There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the ""absorptivity"" of a surface). However, the form of emissivity that is most commonly used is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. Some specific forms of emissivity are detailed below.
Hemispherical emissivity Hemispherical emissivity of a surface, denoted ε, is defined as ε=MeMe∘, where Me is the radiant exitance of that surface; Me° is the radiant exitance of a black body at the same temperature as that surface.
Directional emissivity Directional emissivity of a surface, denoted εΩ, is defined as εΩ=Le,ΩLe,Ω∘, where Le,Ω is the radiance of that surface; Le,Ω° is the radiance of a black body at the same temperature as that surface. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. When the temperature of a surface is accurately known (e.g. by measuring with a contact thermometer), then the sensor's emissivity setting can be adjusted until the temperature measurement by the IR method matches the measured temperature by the contact method; the emissivity setting will indicate the emissivity of the surface, which can be taken into account for later measurements of similar surfaces (only). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. However, it is typical in engineering to assume that a surface's spectral emissivity and absorptivity do not depend on wavelength so that the emissivity is a constant. Black soot absorbs[SEP]What is emissivity?","['C', 'B', 'D']",1.0
Who was the first person to describe the pulmonary circulation system?,"The Greek physician Galen (129 – c. 210 CE) provided the next insights into pulmonary circulation. Several figures such as Hippocrates and al-Nafis receive credit for accurately predicting or developing specific elements of the modern model of pulmonary circulation: Hippocrates for being the first to describe pulmonary circulation as a discrete system separable from systemic circulation as a whole and al-Nafis for making great strides over the understanding of those before him and towards a rigorous model. Greek physician Erasistratus (315 – 240 BCE) agreed with Hippocrates and Aristotle that the heart was the origin of all of the vessels in the body but proposed a system in which air was drawn into the lungs and traveled to the left ventricle via pulmonary veins. The researchers argue that its author, Qusta ibn Luqa, is the best candidate for the discoverer of pulmonary circulation on a similar basis to arguments in favour of al-Nafis generally. However, Avicenna's description of pulmonary circulation reflected the incorrect views of Galen. Hippocrates was the first to describe pulmonary circulation as a discrete system, separable from systemic circulation, in his Corpus Hippocraticum, which is often regarded as the foundational text of modern medicine. The Arab physician, Ibn al-Nafis, wrote the Commentary on Anatomy in Avicenna's Canon in 1242 in which he provided possibly the first known description of the system that remains substantially congruent with modern understandings, in spite of its flaws. Greek philosopher and scientist Aristotle (384 – 322 BCE) followed Hippocrates and proposed that the heart had three ventricles, rather than two, that all connected to the lungs. However, like Aristotle and Galen, al-Nafis still believed in the quasi- mythical concept of vital spirit and that it was formed in the left ventricle from a mixture of blood and air. Galen's theory included a new description of pulmonary circulation: air was inhaled into the lungs where it became the pneuma. * Vascular resistance * Pulmonary shunt ==History== thumb|The opening page of one of Ibn al-Nafis's medical works The pulmonary circulation is archaically known as the ""lesser circulation"" which is still used in non-English literature. The next addition to the historical understanding of pulmonary circulation arrived with the Ancient Greeks. Other sources credit Greek philosopher Hippocrates (460 – 370 BCE), Spanish physician Michael Servetus (c. 1509 – 1553 CE), Arab physician Ibn al-Nafis (1213 – 1288 CE), and Syrian physician Qusta ibn Luqa. He was one of the first to begin to accurately describe the anatomy of the heart and to describe the involvement of the lungs in circulation. Italian physician Realdo Colombo (c. 1515 – 1559 CE) published a book, De re anatomica libri XV, in 1559 that accurately described pulmonary circulation. The Flemish physician Andreas Vesalius (1514 – 1564 CE) published corrections to Galen's view of circulatory anatomy, questioning the existence of interventricular pores, in his book De humani corporis fabrica libri septem in 1543. Finally, in 1628, the influential British physician William Harvey (1578 – 1657 AD) provided at the time the most complete and accurate description of pulmonary circulation of any scholar worldwide in his treatise Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. 
Galen contradicted Erasistratus before him by proposing that arteries carried both air and blood, rather than air alone (which was essentially correct, leaving aside that blood vessels carry constituents of air and not air itself). The Egyptians knew that air played an important role in circulation but did not yet have a conception of the role of the lungs. Physician Alcmaeon (520 – 450 BCE) proposed that the brain, not the heart, was the connection point for all of the vessels in the body. ",Galen,Avicenna,Hippocrates,Aristotle,Ibn al-Nafis,E,kaggle200,"The pulmonary circulation is archaically known as the ""lesser circulation"" which is still used in non-English literature.
In 1242, the Arabian physician Ibn al-Nafis became the first person to accurately describe the process of pulmonary circulation, for which he has been described as the ""Arab Father of Circulation"". Ibn al-Nafis stated in his ""Commentary on Anatomy in Avicenna's Canon"":
The discovery of the pulmonary circulation has been attributed to many scientists with credit distributed in varying ratios by varying sources. In much of modern medical literature, the discovery is credited to English physician William Harvey (1578 – 1657 CE) based on the comprehensive completeness and correctness of his model, despite its relative recency. Other sources credit Greek philosopher Hippocrates (460 – 370 BCE), Spanish physician Michael Servetus (c. 1509 – 1553 CE), Arab physician Ibn al-Nafis (1213 – 1288 CE), and Syrian physician Qusta ibn Luqa. Several figures such as Hippocrates and al-Nafis receive credit for accurately predicting or developing specific elements of the modern model of pulmonary circulation: Hippocrates for being the first to describe pulmonary circulation as a discrete system separable from systemic circulation as a whole and al-Nafis for making great strides over the understanding of those before him and towards a rigorous model. There is a great deal of subjectivity involved in deciding at which point a complex system is ""discovered"", as it is typically elucidated in piecemeal form so that the very first description, most complete or accurate description, and the most significant forward leaps in understanding are all considered acts of discovery of varying significance.
It took centuries for other scientists and physicians to reach conclusions that were similar to and then more accurate than those of al-Nafis and ibn Luqa. This later progress, constituting the gap between medieval and modern understanding, occurred throughout Europe. Italian polymath Leonardo da Vinci (1452 – 1519 CE) was one of the first to propose that the heart was just a muscle, rather than a vessel of spirits and air, but he still subscribed to Galen's ideas of circulation and defended the existence of interventricular pores. The Flemish physician Andreas Vesalius (1514 – 1564 CE) published corrections to Galen's view of circulatory anatomy, questioning the existence of interventricular pores, in his book ""De humani corporis fabrica libri septem"" in 1543. Spanish Michael Servetus, after him, was the first European physician to accurately describe pulmonary circulation. His assertions largely matched those of al-Nafis. In subsequent centuries, he has frequently been credited with the discovery, but some historians have propounded the idea that he potentially had access to Ibn al-Nafis's work while writing his own texts. Servetus published his findings in ""Christianismi Restituto"" (1553): a theological work that was considered heretical by Catholics and Calvinists alike. As a result, both book and author were burned at the stake and only a few copies of his work survived. Italian physician Realdo Colombo (c. 1515 – 1559 CE) published a book, ""De re anatomica libri XV,"" in 1559 that accurately described pulmonary circulation. It is still a matter of debate among historians as to whether Colombo reached his conclusions alone or based them to an unknown degree on the works of al-Nafis and Servetus. Finally, in 1628, the influential British physician William Harvey (1578 – 1657 AD) provided at the time the most complete and accurate description of pulmonary circulation of any scholar worldwide in his treatise ""Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus"". At the macroscopic level, his model is still recognizable in and reconcilable with modern understandings of pulmonary circulation.","Pulmonary Circulation is a peer-reviewed medical journal covering the fields of pulmonary circulation and pulmonary vascular disease. It was established in 2011 and is published by Sage Publications on behalf of the Pulmonary Vascular Research Institute, of which it is an official journal. The editors-in-chief are Jason X.-J. Yuan and Nicholas W. Morrell.
Urofacial (Ochoa) syndrome received the Ochoa name because of the first person to describe it in 1987, Bernardo Ochoa.
The next addition to the historical understanding of pulmonary circulation arrived with the Ancient Greeks. Physician Alcmaeon (520 – 450 BCE) proposed that the brain, not the heart, was the connection point for all of the vessels in the body. He believed that the function of these vessels was to bring the ""spirit"" (""pneuma"") and air to the brain. Empedocles (492 – 432 BCE), a philosopher, proposed a series of pipes, impermeable to blood but continuous with blood vessels, that carried the pneuma throughout the body. He proposed that this spirit was internalized by pulmonary respiration.Hippocrates was the first to describe pulmonary circulation as a discrete system, separable from systemic circulation, in his Corpus Hippocraticum, which is often regarded as the foundational text of modern medicine. Hippocrates developed the view that the liver and spleen produced blood, and that this traveled to the heart to be cooled by the lungs that surrounded it. He described the heart as having two ventricles connected by an interventricular septum, and depicted the heart as the nexus point of all of the vessels of the body. He proposed that some vessels carried only blood and that others carried only air. He hypothesized that these air-carrying vessels were divisible into the pulmonary veins, which carried in air to the left ventricle, and the pulmonary artery, which carried in air to the right ventricle and blood to the lungs. He also proposed the existence of two atria of the heart functioning to capture air. He was one of the first to begin to accurately describe the anatomy of the heart and to describe the involvement of the lungs in circulation. His descriptions built substantially on previous and contemporaneous efforts but, by modern standards, his conceptions of pulmonary circulation and of the functions of the parts of the heart were still largely inaccurate.Greek philosopher and scientist Aristotle (384 – 322 BCE) followed Hippocrates and proposed that the heart had three ventricles, rather than two, that all connected to the lungs. Greek physician Erasistratus (315 – 240 BCE) agreed with Hippocrates and Aristotle that the heart was the origin of all of the vessels in the body but proposed a system in which air was drawn into the lungs and traveled to the left ventricle via pulmonary veins. It was transformed there into the pneuma and distributed throughout the body by arteries, which contained only air. In this system, veins distributed blood throughout the body, and thus blood did not circulate, but rather was consumed by the organs.The Greek physician Galen (129 – c. 210 CE) provided the next insights into pulmonary circulation. Though many of his theories, like those of his predecessors, were marginally or completely incorrect, his theory of pulmonary circulation dominated the medical community's understanding for hundreds of years after his death. Galen contradicted Erasistratus before him by proposing that arteries carried both air and blood, rather than air alone (which was essentially correct, leaving aside that blood vessels carry constituents of air and not air itself). He proposed that the liver was the originating point of all blood vessels. He also theorized that the heart was not a pumping muscle but rather an organ through which blood passed. Galen's theory included a new description of pulmonary circulation: air was inhaled into the lungs where it became the pneuma. 
Pulmonary veins transmitted this pneuma to the left ventricle of the heart to cool the blood simultaneously arriving there. This mixture of pneuma, blood, and cooling produced the vital spirits that could then be transported throughout the body via arteries. Galen further proposed that the heat of the blood arriving in the heart produced noxious vapors that were expelled through the same pulmonary veins that first brought the pneuma. He wrote that the right ventricle played a different role to the left: it transported blood to the lungs where the impurities were vented out so that clean blood could be distributed throughout the body. Though Galen's description of the anatomy of the heart was more complete than those of his predecessors, it included several mistakes. Most notably, Galen believed that blood flowed between the two ventricles of the heart through small, invisible pores in the interventricular septum.The next significant developments in the understanding of pulmonary circulation did not arrive until centuries later. Persian polymath Avicenna (c. 980 – 1037 CE) wrote a medical encyclopedia entitled The Canon of Medicine. In it, he translated and compiled contemporary medical knowledge and added some new information of his own. However, Avicenna's description of pulmonary circulation reflected the incorrect views of Galen.The Arab physician, Ibn al-Nafis, wrote the Commentary on Anatomy in Avicenna's Canon in 1242 in which he provided possibly the first known description of the system that remains substantially congruent with modern understandings, in spite of its flaws. Ibn al-Nafis made two key improvements on Galen's ideas. First, he disproved the existence of the pores in the interventricular septum that Galen had believed allowed blood to flow between the left and right ventricles. Second, he surmised that the only way for blood to get from the right to the left ventricle in the absence of interventricular pores was a system like pulmonary circulation. He also described the anatomy of the lungs in clear and basically correct detail, which his predecessors had not. However, like Aristotle and Galen, al-Nafis still believed in the quasi-mythical concept of vital spirit and that it was formed in the left ventricle from a mixture of blood and air. Despite the enormity of Ibn al-Nafis's improvements on the theories that preceded him, his commentary on The Canon was not widely known to Western scholars until the manuscript was discovered in Berlin, Germany, in 1924. As a result, the ongoing debate among Western scholars as to how credit for the discovery should be apportioned failed to include Ibn al-Nafis until, at earliest, the mid-20th century (shortly after which he came to enjoy a share of this credit). In 2021, several researchers described a text predating the work of al-Nafis, fargh- beyn-roh va nafs, in which there is a comparable report on pulmonary circulation. The researchers argue that its author, Qusta ibn Luqa, is the best candidate for the discoverer of pulmonary circulation on a similar basis to arguments in favour of al-Nafis generally.It took centuries for other scientists and physicians to reach conclusions that were similar to and then more accurate than those of al-Nafis and ibn Luqa. This later progress, constituting the gap between medieval and modern understanding, occurred throughout Europe. 
Italian polymath Leonardo da Vinci (1452 – 1519 CE) was one of the first to propose that the heart was just a muscle, rather than a vessel of spirits and air, but he still subscribed to Galen's ideas of circulation and defended the existence of interventricular pores. The Flemish physician Andreas Vesalius (1514 – 1564 CE) published corrections to Galen's view of circulatory anatomy, questioning the existence of interventricular pores, in his book De humani corporis fabrica libri septem in 1543. Spanish Michael Servetus, after him, was the first European physician to accurately describe pulmonary circulation. His assertions largely matched those of al-Nafis. In subsequent centuries, he has frequently been credited with the discovery, but some historians have propounded the idea that he potentially had access to Ibn al-Nafis's work while writing his own texts. Servetus published his findings in Christianismi Restituto (1553): a theological work that was considered heretical by Catholics and Calvinists alike. As a result, both book and author were burned at the stake and only a few copies of his work survived. Italian physician Realdo Colombo (c. 1515 – 1559 CE) published a book, De re anatomica libri XV, in 1559 that accurately described pulmonary circulation. It is still a matter of debate among historians as to whether Colombo reached his conclusions alone or based them to an unknown degree on the works of al-Nafis and Servetus. Finally, in 1628, the influential British physician William Harvey (1578 – 1657 AD) provided at the time the most complete and accurate description of pulmonary circulation of any scholar worldwide in his treatise Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. At the macroscopic level, his model is still recognizable in and reconcilable with modern understandings of pulmonary circulation.","- The pulmonary circulation is archaically known as the ""lesser circulation"" which is still used in non-English literature.
In 1242, the Arabian physician Ibn al-Nafis became the first person to accurately describe the process of pulmonary circulation, for which he has been described as the ""Arab Father of Circulation"". Several figures such as Hippocrates and al-Nafis receive credit for accurately predicting or developing specific elements of the modern model of pulmonary circulation: Hippocrates for being the first to describe pulmonary circulation as a discrete system separable from systemic circulation as a whole and al-Nafis for making great strides over the understanding of those before him and towards a rigorous model. He was one of the first to begin to accurately describe the anatomy of the heart and to describe the involvement of the lungs in circulation. Ibn al-Nafis stated in his ""Commentary on Anatomy in Avicenna's Canon"":
The discovery of the pulmonary circulation has been attributed to many scientists with credit distributed in varying ratios by varying sources. Finally, in 1628, the influential British physician William Harvey (1578 – 1657 AD) provided at the time the most complete and accurate description of pulmonary circulation of any scholar worldwide in his treatise Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Greek physician Erasistratus (315 – 240 BCE) agreed with Hippocrates and Aristotle that the heart was the origin of all of the vessels in the body but proposed a system in which air was drawn into the lungs and traveled to the left ventricle via pulmonary veins. He proposed that this spirit was internalized by pulmonary respiration. Hippocrates was the first to describe pulmonary circulation as a discrete system, separable from systemic circulation, in his Corpus Hippocraticum, which is often regarded as the foundational text of modern medicine. Italian physician Realdo Colombo (c. 1515 – 1559 CE) published a book, De re anatomica libri XV, in 1559 that accurately described pulmonary circulation. At the macroscopic level, his model is still recognizable in and reconcilabl","- The pulmonary circulation is archaically known as the ""lesser circulation"" which is still used in non-English literature.
In 1242, the Arabian physician Ibn al-Nafis became the first person to accurately describe the process of pulmonary circulation, for which he has been described as the ""Arab Father of Circulation"". Several figures such as Hippocrates and al-Nafis receive credit for accurately predicting or developing specific elements of the modern model of pulmonary circulation: Hippocrates for being the first to describe pulmonary circulation as a discrete system separable from systemic circulation as a whole and al-Nafis for making great strides over the understanding of those before him and towards a rigorous model. He was one of the first to begin to accurately describe the anatomy of the heart and to describe the involvement of the lungs in circulation. Ibn al-Nafis stated in his ""Commentary on Anatomy in Avicenna's Canon"":
The discovery of the pulmonary circulation has been attributed to many scientists with credit distributed in varying ratios by varying sources. Finally, in 1628, the influential British physician William Harvey (1578 – 1657 AD) provided at the time the most complete and accurate description of pulmonary circulation of any scholar worldwide in his treatise Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Greek physician Erasistratus (315 – 240 BCE) agreed with Hippocrates and Aristotle that the heart was the origin of all of the vessels in the body but proposed a system in which air was drawn into the lungs and traveled to the left ventricle via pulmonary veins. He proposed that this spirit was internalized by pulmonary respiration. Hippocrates was the first to describe pulmonary circulation as a discrete system, separable from systemic circulation, in his Corpus Hippocraticum, which is often regarded as the foundational text of modern medicine. Italian physician Realdo Colombo (c. 1515 – 1559 CE) published a book, De re anatomica libri XV, in 1559 that accurately described pulmonary circulation. At the macroscopic level, his model is still recognizable in and reconcilabl[SEP]Who was the first person to describe the pulmonary circulation system?","['E', 'C', 'D']",1.0
What is the fate of a carbocation formed in crystalline naphthalene?,"Naphthalene is an organic compound with formula . The naphthalene anions are strong reducing agents. Unlike benzene, the carbon–carbon bonds in naphthalene are not of the same length. Alkylated naphthalenes are chemical compounds made by the alkylation of naphthalene or its derivatives with an olefin. Oxidation with in the presence of vanadium pentoxide as catalyst gives phthalic anhydride: :C10H8 \+ 4.5 O2 → C6H4(CO)2O + 2 CO2 \+ 2 H2O This reaction is the basis of the main use of naphthalene. However, the solid shows semiconducting character below 100 K. ==Chemical properties== ===Reactions with electrophiles=== In electrophilic aromatic substitution reactions, naphthalene reacts more readily than benzene. As an aromatic hydrocarbon, naphthalene's structure consists of a fused pair of benzene rings. Naphthalene can be hydrogenated under high pressure in the presence of metal catalysts to give 1,2,3,4-tetrahydronaphthalene(), also known as tetralin. This theorem would describe naphthalene as an aromatic benzene unit bonded to a diene but not extensively conjugated to it (at least in the ground state), which is consistent with two of its three resonance structures. :400px|Resonance structures of naphthalene Because of this resonance, the molecule has bilateral symmetry across the plane of the shared carbon pair, as well as across the plane that bisects bonds C2-C3 and C6-C7, and across the plane of the carbon atoms. This difference, established by X-ray diffraction, is consistent with the valence bond model in naphthalene and in particular, with the theorem of cross-conjugation. As such, naphthalene is classified as a benzenoid polycyclic aromatic hydrocarbon (PAH). The point group symmetry of naphthalene is D2h. ===Electrical conductivity=== Pure crystalline naphthalene is a moderate insulator at room temperature, with resistivity of about 1012 Ω m. Where required, crude naphthalene can be further purified by recrystallization from any of a variety of solvents, resulting in 99% naphthalene by weight, referred to as 80 °C (melting point).. Naphtha ( or ) is a flammable liquid hydrocarbon mixture. Naphtholactam is an organic compound derived from naphthalene. The single largest use of naphthalene is the industrial production of phthalic anhydride, although more phthalic anhydride is made from o-xylene. ===Fumigant=== Naphthalene has been used as a fumigant. Exposure to large amounts of naphthalene may cause confusion, nausea, vomiting, diarrhea, blood in the urine, and jaundice (yellow coloration of the skin due to dysfunction of the liver). The crude naphthalene resulting from this process is about 95% naphthalene by weight. The structure of two fused benzene rings was proposed by Emil Erlenmeyer in 1866, and confirmed by Carl Gräbe three years later.C. Graebe (1869) ""Ueber die Constitution des Naphthalins"" (On the structure of naphthalene), Annalen der Chemie und Pharmacie, 149 : 20–28. ==Physical properties== A naphthalene molecule can be viewed as the fusion of a pair of benzene rings. He proposed the name naphthaline, as it had been derived from a kind of naphtha (a broad term encompassing any volatile, flammable liquid hydrocarbon mixture, including coal tar). 
","The carbocation remains positively charged, trapped in the solid.","The carbocation undergoes spontaneous bond breaking, yielding a carbon-helium ion.","The carbocation forms a bond with helium, becoming a stable compound.","The carbocation undergoes decay, forming a negatively charged ion.","The carbocation gains an electron from surrounding molecules, becoming an electrically neutral radical.",E,kaggle200,"In the first step of S1 mechanism, a carbocation is formed which is planar and hence attack of nucleophile (second step) may occur from either side to give a racemic product, but actually complete racemization does not take place. This is because the nucleophilic species attacks the carbocation even before the departing halides ion has moved sufficiently away from the carbocation. The negatively charged halide ion shields the carbocation from being attacked on the front side, and backside attack, which leads to inversion of configuration, is preferred. Thus the actual product no doubt consists of a mixture of enantiomers but the enantiomers with inverted configuration would predominate and complete racemization does not occurs.
According to the IUPAC, a ""carbocation"" is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atom. Prior to the observation of five-coordinate carbocations by Olah and coworkers, ""carbocation"" and ""carbonium ion"" were used interchangeably. Olah proposed a redefinition of ""carbonium ion"" as a carbocation featuring any type of three-center two-electron bonding, while a ""carbenium ion"" was newly coined to refer to a carbocation containing only two-center two-electron bonds with a three-coordinate positive carbon. Subsequently, others have used the term ""carbonium ion"" more narrowly to refer to species that are derived (at least formally) from electrophilic attack of H or R on an alkane, in analogy to other main group onium species, while a carbocation that contains any type of three-centered bonding is referred to as a ""non-classical carbocation"". In this usage, 2-norbornyl cation is not a carbonium ion, because it is formally derived from protonation of an alkene (norbornene) rather than an alkane, although it is a non-classical carbocation due to its bridged structure. The IUPAC acknowledges the three divergent definitions of carbonium ion and urges care in the usage of this term. For the remainder of this article, the term ""carbonium ion"" will be used in this latter restricted sense, while ""non-classical carbocation"" will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If the –OH groups are not alike (i.e. the pinacol is asymmetrical), then the one which creates a more stable carbocation participates in the reaction. Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center.
On occasion, a halonium atom will rearrange to a carbocation. This usually occurs only when that carbocation is an allylic or a benzylic carbocation.","According to the IUPAC, a carbocation is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atom. Prior to the observation of five-coordinate carbocations by Olah and coworkers, carbocation and carbonium ion were used interchangeably. Olah proposed a redefinition of carbonium ion as a carbocation featuring any type of three-center two-electron bonding, while a carbenium ion was newly coined to refer to a carbocation containing only two-center two-electron bonds with a three-coordinate positive carbon. Subsequently, others have used the term carbonium ion more narrowly to refer to species that are derived (at least formally) from electrophilic attack of H+ or R+ on an alkane, in analogy to other main group onium species, while a carbocation that contains any type of three-centered bonding is referred to as a non-classical carbocation. In this usage, 2-norbornyl cation is not a carbonium ion, because it is formally derived from protonation of an alkene (norbornene) rather than an alkane, although it is a non-classical carbocation due to its bridged structure. The IUPAC acknowledges the three divergent definitions of carbonium ion and urges care in the usage of this term. For the remainder of this article, the term carbonium ion will be used in this latter restricted sense, while non-classical carbocation will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
On occasion, a halonium atom will rearrange to a carbocation. This usually occurs only when that carbocation is an allylic or a benzylic carbocation.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If the –OH groups are not alike (i.e. the pinacol is asymmetrical), then the one which creates a more stable carbocation participates in the reaction. Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center. The driving force for this rearrangement step is believed to be the relative stability of the resultant oxonium ion. Although the initial carbocation is already tertiary, the oxygen can stabilize the positive charge much more favorably due to the complete octet configuration at all centers. It can also be seen as the -OH's lone pairs pushing an alkyl group off as seen in the asymmetrical pinacol example. The migration of alkyl groups in this reaction occurs in accordance with their usual migratory aptitude, i.e.phenyl carbocation > hydride > tertiary carbocation (if formed by migration) > secondary carbocation (if formed by migration) > methyl carbocation. {Why carbocation? Because every migratory group leaves by taking electron pair with it.} The conclusion is that the group which stabilizes the carbocation more effectively is migrated.","Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center.
On occasion, a halonium atom will rearrange to a carbocation. Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center. This usually occurs only when that carbocation is an allylic or a benzylic carbocation. This usually occurs only when that carbocation is an allylic or a benzylic carbocation.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. For the remainder of this article, the term carbonium ion will be used in this latter restricted sense, while non-classical carbocation will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
On occasion, a halonium atom will rearrange to a carbocation. For the remainder of this article, the term ""carbonium ion"" will be used in this latter restricted sense, while ""non-classical carbocation"" will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If the –OH groups are not alike (i.e. the pinacol is asymmetrical), then the one which creates a more stable carbocation participates in the reaction. Prior to the observation of five-coordinate carbocations by Olah and coworkers, carbocation and carbonium ion were used interchangeably. The migration of alkyl groups in this reaction occurs in accordance with their usual migratory aptitude, i.e. phenyl carbocation > hydride > tertiary carbocation (if formed by migration) > secondary carbocation (if formed by migration) > methyl carbocation. Thus the actual product no doubt consists of a mixture of enantiomers but the enantiomers with inverted configuration would predominate and complete racemization does not occur.
According to the IUPAC, a ""carbocation"" is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atom. Although the initial carbocation is already tertiary, the oxygen can stabilize the positiv","Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center.
On occasion, a halonium atom will rearrange to a carbocation. Subsequently, an alkyl group from the adjacent carbon migrates to the carbocation center. This usually occurs only when that carbocation is an allylic or a benzylic carbocation. This usually occurs only when that carbocation is an allylic or a benzylic carbocation.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. For the remainder of this article, the term carbonium ion will be used in this latter restricted sense, while non-classical carbocation will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
On occasion, a halonium atom will rearrange to a carbocation. For the remainder of this article, the term ""carbonium ion"" will be used in this latter restricted sense, while ""non-classical carbocation"" will be used to refer to any carbocation with C–C and/or C–H σ-bonds delocalized by bridging.
In the course of this organic reaction, protonation of one of the –OH groups occurs and a carbocation is formed. If the –OH groups are not alike (i.e. the pinacol is asymmetrical), then the one which creates a more stable carbocation participates in the reaction. Prior to the observation of five-coordinate carbocations by Olah and coworkers, carbocation and carbonium ion were used interchangeably. The migration of alkyl groups in this reaction occurs in accordance with their usual migratory aptitude, i.e. phenyl carbocation > hydride > tertiary carbocation (if formed by migration) > secondary carbocation (if formed by migration) > methyl carbocation. Thus the actual product no doubt consists of a mixture of enantiomers but the enantiomers with inverted configuration would predominate and complete racemization does not occur.
According to the IUPAC, a ""carbocation"" is any cation containing an even number of electrons in which a significant portion of the positive charge resides on a carbon atom. Although the initial carbocation is already tertiary, the oxygen can stabilize the positiv[SEP]What is the fate of a carbocation formed in crystalline naphthalene?","['E', 'A', 'D']",1.0
What is the main focus of the Environmental Science Center at Qatar University?,"The Environmental Science Center is a research center at Qatar University and was established in 1980 to promote environmental studies across the state of Qatar with main focus on marine science, atmospheric and biological sciences. The center also has 12 labs equipped with state-of-arts instruments. == See also == * Qatar University * Qatar University Library * Mariam Al Maadeed * Center for Advanced Materials (CAM) == External links == * Research and Graduate Studies Office at Qatar University * Qatar University Newsroom == References == Category:1980 establishments in Qatar Category:Organisations based in Doha Category:Research institutes in Qatar Category:Educational institutions established in 1980 Category:Qatar University Category:Education by subject Category:Human impact on the environment Category:Oceans Category:Fishing Category:Earth sciences Category:Nature Category:Biology For the past 18 years, ESC monitored and studied Hawksbill turtle nesting sites in Qatar. == History == * in 1980 it was named Scientific and Applied Research Center (SARC). * in 2005 it was restructured and renamed Environmental Studies Center (ESC). * in 2015, the business name was changed to Environmental Science Center (ESC) to better reflect the research-driven objectives. == Research clusters == The ESC has 3 major research clusters that cover areas of strategic importance to Qatar. According to the Qatar Foundation, its initiatives are oriented towards education, science and research, and community development. The Scientific Center of Kuwait, located in Salmiya, Kuwait, serves as a center for environmental education in the Persian Gulf region. The clusters are: * Atmospheric sciences cluster * Earth sciences cluster * Marine sciences cluster with 2 majors: ** Terrestrial Ecology ** Physical and Chemical Oceanography == UNESCO Chair in marine sciences == The first of its kind in the Arabian Gulf region, United Nations Educational, Scientific and Cultural Organization (UNESCO) have announced the establishment of the UNESCO Chair in marine sciences at QU's Environmental Science Center. It aims to build the educational, life and social experience of students. ===Student Clubs=== Student clubs are divided into three categories: *Departmental and College clubs such as the Statistics Club *Talent and skill clubs such as the Voice Club and the Poetry Club *Clubs and public associations, such as the Book Club == Research centers == Research is conducted in and across colleges and is buoyed by an increased research budget, a multimillion-dollar Research Complex and partnerships. 
;18 centers of research # Biomedical Research Center (BRC) # Center for Advanced Materials (CAM) # Environmental Science Center (ESC) # Social and Economic Survey Research Institute (SESRI) # Laboratory Animal Research Center (LARC) # Qatar University Young Scientists Center (QUYSC) # Ibn Khaldon Center for Humanities and Social Sciences # Central Lab Unit (CLU) # Center for Entrepreneurship (CFE) # Center for Sustainable Development (CSD) # Centre for Law and Development (CLD) # Early Childhood Center # Gas Processing Center (GPC) # Gulf Studies Center (GSC) # KINDI Center for Computing Research (KINDI) # National Center for Educational Development (NCED) # Qatar Mobility Innovation Center (QMIC) # Qatar Transportation and Traffic Safety Center (QTTSC) == Notable alumni == *Noor Al Mazroei, chef and activist *Abdulla bin Abdulaziz bin Turki Al Subaie, Qatari Minister of Municipality *Moza bint Nasser, consort of Hamad bin Khalifa Al Thani *Mohammed bin Abdulrahman bin Jassim Al Thani, Qatari Prime Minister *Jawaher bint Hamad bin Suhaim Al Thani, wife of the Emir of Qatar *Mariam Al Maadeed, Qatari scientist, Vice President for Research and Graduate Studies at Qatar University *Nasser Al-Khelaifi, businessman, president of Paris Saint-Germain *Saad Al Mohannadi, Qatari President of Public Works Authority Ashgal *Amal Al-Malki, academic *Abdulrahman bin Hamad bin Jassim bin Hamad Al Thani, Qatari Minister of Culture == See also == * Qatar University Library * Qatar University Stadium * Education in Qatar ==References== Category:Universities in Qatar Category:Educational institutions established in 1973 Category:Organisations based in Doha Category:1973 establishments in Qatar It is the largest college by both number of programs and student population at Qatar University, with a total of 2,383 students; 1,933 Arts majors and 450 Science majors. A QAR 20 million Scientific and Applied Research Center is under construction. ==Colleges and Departments== ===College of Arts and Sciences=== thumb|The Women's College of Arts and Sciences at Qatar University in 2008 The College of Arts and Sciences was established in 2004 through the merging of two former colleges; the College of Humanities and Social Sciences, and the College of Science. Qatar University (; transliterated: Jami'at Qatar) is a public research university located on the northern outskirts of Doha, Qatar. US Education department investigated Georgetown University, Texas A&M;, and Cornell and Rutgers over their funding from Qatar. == Science and research == A program known as the Qatar Science Leadership Program was initiated in 2008 in order to help develop aspiring applied science students. 
Departments: *Department of Arabic Language **History *Department of Biological & Environmental Sciences **Biological Sciences **Environmental Sciences *Department of Chemistry & Earth Sciences **Chemistry Program accredited by the CSC *Department of English Literature and Linguistics *Department of Health Sciences **Biomedical Program accredited by the NAACLS **Human Nutrition Program **Public health *Department of Humanities *Department of Mass Communication **Mass Communication Program *Department of Mathematics, Statistics & Physics *Department of Social Sciences **Social Work **Psychology **Sociology **International Affairs **Policy, Planning and Development **Statistics *Sport Science Programs: *Arabic for Non-Native Speakers Program ===College of Business & Economics=== thumb|Men's College of Business & Economics at Qatar University in 2008 Founded in 1985, it has begun work on a new QR 185 million facility to accommodate its student body and provide resources.QU 2008/2009 Brochure Dr. Nitham M. Hindi was appointed as Dean in August 2010. The center will be housed and managed by the College of Engineering and its funding will be obtained from different sources including Qatar University, companies and government agencies. The services provided by the center have been designed to address the necessities and challenges of both Qatar University and the Qatari Industry. Research topics include Arabic language computer technologies, computer security and data analysis. ===Environmental initiatives=== In the environmental sciences, Qatar Foundation founded the Qatar Green Building Council in 2009, and the Qatar Environmental & Energy Research Institute (QEERI). ===Medicine initiatives=== In 2012, the Qatar Biomedical Research Institute (QBRI) was established to develop translational biomedical research and biotechnology, focusing on diabetes, cancer and cardiovascular diseases. The Program offers a Bachelor of Science degree which allows for one of 3 concentrations: *Sport Management *Exercise and Fitness *Physical Education ==Honors Program== Qatar University's Honors Program was established in 2009. to provide academic opportunities for high- achieving students. These centers sit alongside the Qatar Faculty of Islamic Studies which began its first graduate classes in the 2007–2008 academic year. For courses which are not offered as Honors, students may propose an ""Honors Contract"" to specify honors-level objectives and goals to be monitored by a sponsoring professor. ==Qatar University student clubs== Qatar University is the biggest and most popular university in Qatar, as stated by UniRank. The college began with a total of 150 students (93 women and 57 men) and was later expanded to become the University of Qatar in 1977 with four new colleges : Education, Humanities & Social Sciences, Sharia & Law & Islamic Studies, and Science. Qatar Foundation for Education, Science and Community Development () is a state-led non-profit organization in Qatar, founded in 1995 by then-emir Hamad bin Khalifa Al Thani and his second wife Moza bint Nasser Al-Missned. 
","Environmental studies, with a main focus on marine science, atmospheric and political sciences.","Environmental studies, with a main focus on marine science, atmospheric and physical sciences.","Environmental studies, with a main focus on marine science, atmospheric and social sciences.","Environmental studies, with a main focus on marine science, atmospheric and biological sciences.","Environmental studies, with a main focus on space science, atmospheric and biological sciences.",D,kaggle200,"The main focus of comprehensive crawls is to automatically harvest the biggest number of Czech web resources. The list of URLs is from organisation CZ.NIC.
Another subgenre is called , in which sexual gratification of the player is the main focus of the game.
The application of mediatization theory to the study of religion was initiated by Stig Hjarvard with a main focus on Northern Europe.
The bare midriff, with flat, toned abs, became the trend in Hollywood in the 2000s. In the Spring-Summer 2015 Haute Couture show by Chanel in Paris, midriff-baring tops were a main focus.","Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Climatology is the study of atmospheric changes (both long and short-term) that define average climates and their change over time, due to both natural and anthropogenic climate variability. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System.
McNairn received a Bachelor of Environmental Studies from the University of Waterloo, in 1987, a Masters in Soil Science from the University of Guelph, in 1991, and a Ph.D. in Geography from Université Laval in 1999.
2000s–2010s The bare midriff, with flat, toned abs, became the trend in Hollywood in the 2000s. In the Spring-Summer 2015 Haute Couture show by Chanel in Paris, midriff-baring tops were a main focus.","Meteorology includes atmospheric chemistry and atmospheric physics with a major focus on weather forecasting. Atmospheric science is the study of the Earth's atmosphere and its various inner-working physical processes. Atmospheric science has been extended to the field of planetary science and the study of the atmospheres of the planets and natural satellites of the Solar System.
McNairn received a Bachelor of Environmental Studies from the University of Waterloo, in 1987, a Masters in Soil Science from the University of Guelph, in 1991, and a Ph.D. in Geography from Université Laval in 1999. Climatology is the study of atmospheric changes (both long and short-term) that define average climates and their change over time, due to both natural and anthropogenic climate variability. Aeronomy is the study of the upper layers of the atmosphere, where dissociation and ionization are important.
2000s–2010s The bare midriff, with flat, toned abs, became the trend in Hollywood in the 2000s. The main focus of comprehensive crawls is to automatically harvest the biggest number of Czech web resources. The list of URLs is from organisation CZ.NIC.
Another subgenre is called , in which sexual gratification of the player is the main focus of the game.
The application of mediatization theory to the study of religion was initiated by Stig Hjarvard with a main focus on Northern Europe.
The bare midriff, with flat, toned abs, became the trend in Hollywood in the 2000s. In the Spring-Summer 2015 Haute Couture show by Chanel in Paris, midriff-baring tops were a main focus.","[SEP]What is the main focus of the Environmental Science Center at Qatar University?","['B', 'C', 'D']",0.3333333333333333
What is the purpose of obtaining surgical resection specimens?,"Resection may refer to: *Resection (surgery), the removal by surgery of all or part of an organ or other body structure *Segmental resection (or segmentectomy), the partial removal of an organ or other body structure *Position resection, a means of establishing a location by measuring angles only to known points *Resection (free stationing), a means of establishing a position and orientation of a total station by measuring angles and distances to known points *DNA end resection, the process of cutting away the 5' side of a blunt end of double-stranded DNA ** Resection is the removal of all or part of an internal organ and/or connective tissue. A segmental resection specifically removes an independent vascular region of an organ such as a hepatic segment, a bronchopulmonary segment or a renal lobe. The resection margin is the edge of the removed tissue; it is important that this shows free of cancerous cells on examination by a pathologist. ==References== * == External links == * Segmental resection entry in the public domain NCI Dictionary of Cancer Terms Category:Surgical procedures and techniques Category:Surgical removal procedures Segmental resection (or segmentectomy) is a surgical procedure to remove part of an organ or gland, as a sub-type of a resection, which might involve removing the whole body part. Surgery is a medical specialty that uses manual and/or instrumental techniques to physically reach into a subject's body in order to investigate or treat pathological conditions such as a disease or injury, to alter bodily functions (e.g. bariatric surgery such as gastric bypass), to improve appearance (cosmetic surgery), or to remove/replace unwanted tissues (body fat, glands, scars or skin tags) or foreign bodies. Resectoscope may refer to: * Cystoscope, with a cauterization loop to avail for resection of tissue * Hysteroscope, with a cauterization loop to avail for resection of tissue ""Principles of Surgical Oncology"" in Pazdur R, Wagman LD, Camphausen KA, Hoskins WJ (Eds) Cancer Management: A Multidisciplinary Approach . 11 ed. 2008. or other tissue. * resection – partial removal of an organ or other bodily structure. * reconnection of organs, tissues, etc., particularly if severed. ** Microsurgery involves the use of an operating microscope for the surgeon to see and manipulate small structures. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then the peritoneum. * Based on purpose: ** Exploratory surgery is performed to establish or aid a diagnosis. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. He invented several surgical instruments for purposes such as inspection of the interior of the urethra and for removing foreign bodies from the throat, the ear, and other body organs. In common colloquialism, the term ""surgery"" can also refer to the facility where surgery is performed, or, in British English, simply the office/clinic of a physician, dentist or veterinarian. == Definitions == As a general rule, a procedure is considered surgical when it involves cutting of a person's tissues or closure of a previously sustained wound. Blood vessels may be clamped or cauterized to prevent bleeding, and retractors may be used to expose the site or keep the incision open. 
In lung cancer surgery, segmental resection refers to removing a section of a lobe of the lung. Reasons for reoperation include postoperative complications such as persistent bleeding, development of seroma or abscess, tissue necrosis or colonization requiring debridement, or oncologically unclear resection margins that demand more extensive resection. == Description of surgical procedure == === Location === Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Excision is the resection of only part of an organ, tissue or other body part (e.g. skin) without discriminating specific vascular territories. ** Endoscopic surgery uses optical instruments to relay the image from inside an enclosed body cavity to the outside, and the surgeon performs the procedure using specialized handheld instruments inserted through trocars placed through the body wall. * By equipment used: ** Laser surgery involves use of laser ablation to divide tissue instead of a scalpel, scissors or similar sharp-edged instruments. ","To remove an entire diseased area or organ for definitive surgical treatment of a disease, with pathological analysis of the specimen used to confirm the diagnosis.",To perform visual and microscopic tests on tissue samples using automated analysers and cultures.,To work in close collaboration with medical technologists and hospital administrations.,To administer a variety of tests of the biophysical properties of tissue samples.,To obtain bodily fluids such as blood and urine for laboratory analysis of disease diagnosis.,A,kaggle200,"Surgical resection of liver metastases from colorectal cancer has been found to be safe and cost-effective.
There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound, CT scan, or magnetic resonance imaging. Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion, whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis.
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine, as well as tissues, using the tools of chemistry, clinical microbiology, hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists, hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures. Sometimes the general term ""laboratory medicine specialist"" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
""Surgical resection"" specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected. However, pathological analysis of these specimens is critically important in confirming the previous diagnosis, staging the extent of malignant disease, establishing whether or not the entire diseased area was removed (a process called ""determination of the surgical margin"", often using frozen section), identifying the presence of unsuspected concurrent diseases, and providing information for postoperative treatment, such as adjuvant chemotherapy in the case of cancer.","Surgical pathology Surgical pathology is one of the primary areas of practice for most anatomical pathologists. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologists. Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests.There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound, CT scan, or magnetic resonance imaging. Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion, whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis.
Clinical pathology Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine, as well as tissues, using the tools of chemistry, clinical microbiology, hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists, hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures. Sometimes the general term ""laboratory medicine specialist"" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected. However, pathological analysis of these specimens is critically important in confirming the previous diagnosis, staging the extent of malignant disease, establishing whether or not the entire diseased area was removed (a process called ""determination of the surgical margin"", often using frozen section), identifying the presence of unsuspected concurrent diseases, and providing information for postoperative treatment, such as adjuvant chemotherapy in the case of cancer.","Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. However, pathological analysis of these specimens is critically important in confirming the previous diagnosis, staging the extent of malignant disease, establishing whether or not the entire diseased area was removed (a process called ""determination of the surgical margin"", often using frozen section), identifying the presence of unsuspected concurrent diseases, and providing information for postoperative treatment, such as adjuvant chemotherapy in the case of cancer. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologists. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests. There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. Immunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
""Surgical resection"" specimens are obtained by the therapeutic surgica","Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs)Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patientHowever, pathological analysis of these specimens is critically important in confirming the previous diagnosis, staging the extent of malignant disease, establishing whether or not the entire diseased area was removed (a process called ""determination of the surgical margin"", often using frozen section), identifying the presence of unsuspected concurrent diseases, and providing information for postoperative treatment, such as adjuvant chemotherapy in the case of cancerHowever, pathological analysis of these specimens is critically important in confirming the previous diagnosis, staging the extent of malignant disease, establishing whether or not the entire diseased area was removed (a process called ""determination of the surgical margin"", often using frozen section), identifying the presence of unsuspected concurrent diseases, and providing information for postoperative treatment, such as adjuvant chemotherapy in the case of cancer.Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists, medical subspecialists, dermatologists, and interventional radiologistsThese determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests.There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resectionsImmunopathology, the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology.
""Surgical resection"" specimens are obtained by the therapeutic surgica[SEP]What is the purpose of obtaining surgical resection specimens?","['A', 'C', 'B']",1.0
What is the function of mammary glands in mammals?,"A mammary gland is an exocrine gland in humans and other mammals that produces milk to feed young offspring. The mammary glands are arranged in organs such as the breasts in primates (for example, humans and chimpanzees), the udder in ruminants (for example, cows, goats, sheep, and deer), and the dugs of other animals (for example, dogs and cats). The number and positioning of mammary glands varies widely in different mammals. These mammary glands are modified sweat glands. == Structure== The basic components of a mature mammary gland are the alveoli (hollow cavities, a few millimeters large), which are lined with milk-secreting cuboidal cells and surrounded by myoepithelial cells. The salivary glands in many vertebrates including mammals are exocrine glands that produce saliva through a system of ducts. They not only help to support mammary basic structure, but also serve as a communicating bridge between mammary epithelia and their local and global environment throughout this organ's development. ===Histology=== thumb|Normal histology of the breast. thumb|upright|Light micrograph of a human proliferating mammary gland during estrous cycle. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. The development of the mammary gland occurs mainly after birth. Breast development results in prominent and developed structures on the chest known as breasts in primates, which serve primarily as mammary glands. As a result of estrous cycling, the mammary gland undergoes dynamic changes where cells proliferate and then regress in an ordered fashion. ====Pregnancy==== During pregnancy, the ductal systems undergo rapid proliferation and form alveolar structures within the branches to be used for milk production. One theory proposes that mammary glands evolved from glands that were used to keep the eggs of early mammals moistLactating on Eggs. Production of milk (lactation) from a male mammal's mammary glands is well- documented in the Dayak fruit bat and the Bismarck masked flying fox. Fauna Paraguay 0 0 25 to 27 25 to 27 Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples and mammary glands. Under the influence of estrogen, stromal and fat tissue surrounding the ductal system in the mammary glands also grows. Mother's milk is milk produced by mammary glands located in the breast of a human female to feed a young child. In the case of prototherians, both males and females have functional mammary glands, but their mammary glands are without nipples. Mammary glands are true protein factories, and several labs have constructed transgenic animals, mainly goats and cows, to produce proteins for pharmaceutical use. Concerning metatherians and eutherians, only females have functional mammary glands. After delivery, lactation occurs within the mammary gland; lactation involves the secretion of milk by the luminal cells in the alveoli. These components of the extracellular matrix are strong determinants of duct morphogenesis. ===Biochemistry=== Estrogen and growth hormone (GH) are essential for the ductal component of mammary gland development, and act synergistically to mediate it. 
",Mammary glands produce milk to feed the young.,Mammary glands help mammals draw air into the lungs.,Mammary glands help mammals breathe with lungs.,Mammary glands excrete nitrogenous waste as urea.,Mammary glands separate oxygenated and deoxygenated blood in the mammalian heart.,A,kaggle200,"In females, H19 is expressed postnatally during puberty and pregnancy in the mammary glands, and in the uterus during pregnancy.
A distinguishing characteristic of the class ""Mammalia"" is the presence of mammary glands. Mammary glands are modified sweat glands that produce milk, which is used to feed the young for some time after birth. Only mammals produce milk. Mammary glands are obvious in humans, because the female human body stores large amounts of fatty tissue near the nipples, resulting in prominent breasts. Mammary glands are present in all mammals, although they are normally redundant in males of the species.
Mammals are divided into 3 groups: prototherians, metatherians, and eutherians. In the case of prototherians, both males and females have functional mammary glands, but their mammary glands are without nipples. These mammary glands are modified sebaceous glands. Concerning metatherians and eutherians, only females have functional mammary glands. Their mammary glands can be termed as breasts or udders. In the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands). In the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from it. For instance, cows and buffalo each have one udder with four teats, whereas sheep and goats each have two teats protruding from the udder. These mammary glands are modified sweat glands.
Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples and mammary glands. The male Dayak fruit bat has lactating mammary glands. Male lactation occurs infrequently in some species.","Histology A mammary gland is a specific type of apocrine gland specialized for manufacture of colostrum when giving birth. Mammary glands can be identified as apocrine because they exhibit striking ""decapitation"" secretion. Many sources assert that mammary glands are modified sweat glands. Some authors dispute that and argue instead that they are sebaceous glands.
General The breasts of female humans vary from most other mammals that tend to have less conspicuous mammary glands. The number and positioning of mammary glands varies widely in different mammals. The protruding teats and accompanying glands can be located anywhere along the two milk lines. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. The number of teats varies from 2 (in most primates) to 18 (in pigs). The Virginia opossum has 13, one of the few mammals with an odd number. The following table lists the number and position of teats and glands found in a range of mammals: Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples and mammary glands. The male Dayak fruit bat has lactating mammary glands. Male lactation occurs infrequently in some species.Mammary glands are true protein factories, and several labs have constructed transgenic animals, mainly goats and cows, to produce proteins for pharmaceutical use. Complex glycoproteins such as monoclonal antibodies or antithrombin cannot be produced by genetically engineered bacteria, and the production in live mammals is much cheaper than the use of mammalian cell cultures.
Mammals are divided into 3 groups: prototherians, metatherians, and eutherians. In the case of prototherians, both males and females have functional mammary glands, but their mammary glands are without nipples. These mammary glands are modified sebaceous glands. Concerning metatherians and eutherians, only females have functional mammary glands. Their mammary glands can be termed as breasts or udders. In the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands). In the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from it. For instance, cows and buffalo each have one udder with four teats, whereas sheep and goats each have two teats protruding from the udder. These mammary glands are modified sweat glands.","Mammary glands are modified sweat glands that produce milk, which is used to feed the young for some time after birth. The number and positioning of mammary glands varies widely in different mammals. Their mammary glands can be termed as breasts or udders. Mammary glands are obvious in humans, because the female human body stores large amounts of fatty tissue near the nipples, resulting in prominent breasts. Mammary glands are present in all mammals, although they are normally redundant in males of the species.
Mammals are divided into 3 groups: prototherians, metatherians, and eutherians. Histology A mammary gland is a specific type of apocrine gland specialized for manufacture of colostrum when giving birth. Concerning metatherians and eutherians, only females have functional mammary glands. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. These mammary glands are modified sweat glands. These mammary glands are modified sebaceous glands. In the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from it. Mammary glands can be identified as apocrine because they exhibit striking ""decapitation"" secretion. Many sources assert that mammary glands are modified sweat glands. In the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands). Some authors dispute that and argue instead that they are sebaceous glands.
General The breasts of female humans vary from most other mammals that tend to have less conspicuous mammary glands. These mammary glands are modified sweat glands.
Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples and mammary glands. In females, H19 is expressed postnatally during puberty and pregnancy in the mammary glands, and in the uterus during pregnancy.
A distinguishing characteristic of the class ""Mammalia"" is the presence of mammary glands. In the case of ","[SEP]What is the function of mammary glands in mammals?","['A', 'E', 'B']",1.0
What is the relationship between interstellar and cometary chemistry?,"The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula. == Research == thumb|Transition from atomic to molecular gas at the border of the Orion molecular cloud Research is progressing on the way in which interstellar and circumstellar molecules form and interact, e.g. by including non-trivial quantum mechanical phenomena for synthesis pathways on interstellar particles. The authors describe the scientific nature of comets, as well as their varying roles and perceptions throughout history. This research could have a profound impact on our understanding of the suite of molecules that were present in the molecular cloud when our solar system formed, which contributed to the rich carbon chemistry of comets and asteroids and hence the meteorites and interstellar dust particles which fall to the Earth by the ton every day. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). This has prompted a still ongoing search for interstellar molecules which are either of direct biological importance – such as interstellar glycine, discovered in a comet within our solar system in 2009 – or which exhibit biologically relevant properties like chirality – an example of which (propylene oxide) was discovered in 2016 – alongside more basic astrochemical research. == Spectroscopy == One particularly important experimental tool in astrochemistry is spectroscopy through the use of telescopes to measure the absorption and emission of light from molecules and atoms in various environments. The theoretical importance granted to these spectroscopic results was greatly expanded upon the development of quantum mechanics, as the theory allowed for these results to be compared to atomic and molecular emission spectra which had been calculated a priori. === History of astrochemistry === While radio astronomy was developed in the 1930s, it was not until 1937 that any substantial evidence arose for the conclusive identification of an interstellar molecule – up until this point, the only chemical species known to exist in interstellar space were atomic. Comets have appeared in numerous works of fiction. The evolution of human understanding of comets is also detailed, and thinkers and astronomers such as Edmond Halley, Immanuel Kant, and William Huggins are discussed. The word ""astrochemistry"" may be applied to both the Solar System and the interstellar medium. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form. == History == As an offshoot of the disciplines of astronomy and chemistry, the history of astrochemistry is founded upon the shared history of the two fields. 
By comparing astronomical observations with laboratory measurements, astrochemists can infer the elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. In the thirty years afterwards, a small selection of other molecules were discovered in interstellar space: the most important being OH, discovered in 1963 and significant as a source of interstellar oxygen,) and H2CO (formaldehyde), discovered in 1969 and significant for being the first observed organic, polyatomic molecule in interstellar space The discovery of interstellar formaldehyde – and later, other molecules with potential biological significance, such as water or carbon monoxide – is seen by some as strong supporting evidence for abiogenetic theories of life: specifically, theories which hold that the basic molecular components of life came from extraterrestrial sources. In fact, CO is such a common interstellar molecule that it is used to map out molecular regions. The development of advanced observational and experimental spectroscopy has allowed for the detection of an ever-increasing array of molecules within solar systems and the surrounding interstellar medium. When it was discovered in 1939 it was not recognized as a comet and designated as asteroid 1939 TN. == References == == External links == * Orbital simulation from JPL (Java) / Horizons Ephemeris * 139P/Vaisala-Oterma – Seiichi Yoshida @ aerith.net *139P at Kronk's Cometography Category:Periodic comets 0139 Category:Discoveries by Liisi Oterma \+ Category:Comets in 2017 19391007 Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, as well as the structure of stellar interiors. Comet is a 1985 popular-science book by Carl Sagan and Ann Druyan. In July 2015, scientists reported that upon the first touchdown of the Philae lander on comet 67/P surface, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate and propionaldehyde. thumb|center|upright=4.5|The chemical diversity in the different types of astronomical object is noteworthy. ","Cometary chemistry is responsible for the formation of interstellar molecules, but there is no direct connection between the two.","Interstellar and cometary chemistry are the same thing, just with different names.","There is a possible connection between interstellar and cometary chemistry, as indicated by the similarity between interstellar and cometary ices and the analysis of organics from comet samples returned by the Stardust mission.","There is no relationship between interstellar and cometary chemistry, as they are two completely different phenomena.","Interstellar chemistry is responsible for the formation of comets, but there is no direct connection between the two.",C,kaggle200,"Several structures have been described as cometary knots or cometary globules that surround R Coronae Borealis, which is a peculiar star described as potentially the result of a white dwarf merger or final helium shell flash that periodically dims due to a build-up of carbon dust surrounding it, acting as a 'natural coronograph'.
Project Hyperion, one of the projects of Icarus Interstellar has looked into various feasibility issues of crewed interstellar travel. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies.
With the experiments onboard of the EXPOSE facilities, various aspects of astrobiology were investigated that could not be sufficiently approached by use of laboratory facilities on ground. The chemical set of experiments is designed to reach a better understanding of the role of interstellar, cometary and planetary chemistry in the origin of life. Comets and meteorites are interpreted as exogenous sources of prebiotic molecules on the early Earth. All data achieved from the astrobiological experiments on both EXPOSE missions will add to the understanding of the origin and evolution of life on Earth and on the possibility of its distribution in space or origin elsewhere.
Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since hydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing ""reduced"" species like HO, CH, and NH. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry. Lab experiments on the photochemistry of simple interstellar ices have produced amino acids. The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.","2017 interstellar meteor CNEOS 2017-03-09 (aka Interstellar meteor 2; IM2), a meteor with a mass of roughly 6.3 tons, burned up in the Earth's atmosphere on March 9, 2017. Similar to IM1, it has a high mechanical strength.In September 2022, astronomers Amir Siraj and Avi Loeb reported the discovery of a candidate interstellar meteor, CNEOS 2017-03-09 (aka Interstellar meteor 2; IM2), that impacted Earth in 2017 and is considered, based in part on the high material strength of the meteor, to be a possible interstellar object.
Cometary and interstellar dust streams The Helios zodiacal light measurements show excellent stability.
Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Since dihydrogen is by far the most abundant molecule in the universe, the initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic, then the H atoms react with available O, C and N atoms, producing ""reduced"" species like H2O, CH4, and NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry. Lab experiments on the photochemistry of simple interstellar ices have produced amino acids. The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula.","The similarity between interstellar and cometary ices (as well as comparisons of gas phase compounds) have been invoked as indicators of a connection between interstellar and cometary chemistry. The chemical set of experiments is designed to reach a better understanding of the role of interstellar, cometary and planetary chemistry in the origin of life. This is somewhat supported by the results of the analysis of the organics from the comet samples returned by the Stardust mission but the minerals also indicated a surprising contribution from high-temperature chemistry in the solar nebula. Comets and meteorites are interpreted as exogenous sources of prebiotic molecules on the early Earth. Similar to IM1, it has a high mechanical strength. In September 2022, astronomers Amir Siraj and Avi Loeb reported the discovery of a candidate interstellar meteor, CNEOS 2017-03-09 (aka Interstellar meteor 2; IM2), that impacted Earth in 2017 and is considered, based in part on the high material strength of the meteor, to be a possible interstellar object.
Cometary and interstellar dust streams The Helios zodiacal light measurements show excellent stability.
Such IR observations have determined that in dense clouds (where there are enough particles to attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-temperature chemistry to occur. Several structures have been described as cometary knots or cometary globules that surround R Coronae Borealis, which is a peculiar star described as potentially the result of a white dwarf merger or final helium shell flash that periodically dims due to a build-up of carbon dust surrounding it, acting as a 'natural coronograph'.
Project Hyperion, one of the projects of Icarus Interstellar has looked into various feasibility issues of crewed interstellar travel. Lab ex","[SEP]What is the relationship between interstellar and cometary chemistry?","['C', 'E', 'D']",1.0
What is the reason for recycling rare metals according to the United Nations?,"Recycling is an important part of creating more sustainable economies, reducing the cost and environmental impact of raw materials. However, the report found that less than a third of the crucial 60 metals studied in the report have an end- of-life recycling rate above 50 per cent, and 34 of them have a recycling rate of below 1 per cent.Clean technologies under threat from low metals reuse Environmental Data Interactive Exchange downloaded 22 September 2011 Green technologies would certainly benefit from greater metals recycling. Recycling Rates of Metals: A Status Report was the 2nd of six scientific assessments on global metals to be published by the International Resource Panel (IRP) of the United Nations Environment Programme. The IRP provides independent scientific assessments and expert advice on a variety of areas, including: • the volume of selected raw material reserves and how efficiently these resources are being used • the lifecycle-long environmental impacts of products and services created and consumed around the globe • options to meet human and economic needs with fewer or cleaner resources. ==About the report== As metal use has increased during the 20th and 21st centuries, there has been a substantial shift from metal resources being subterranean geological stores to becoming ‘above-ground’ stocks in use in society.Metal Stocks in Society: Scientific synthesis, 2010, International Resource Panel, UNEP Metals can be used over and over again, saving energy and minimising the negative environmental impacts associated with mining virgin material, so it makes sense to recycle these above-ground stocks. Recycling can be carried out on various raw materials. Stocks of these metals are often tied up in old gadgets, such as out-of- date mobile phones, which people often leave in a cupboard and forget about.Essential 'green' metals are being thrown away, by Michael Marshall New Scientist 31 May 2011 The report’s authors concluded that appropriate recycling infrastructure should be developed, supported by policy instruments such as research and development, economic incentives and capacity-building activities. == References == == External links == * www.resourcepanel.org * www.unep.org Category:United Nations Environment Programme Iron and steel are the world's most recycled materials, and among the easiest materials to reprocess, as they can be separated magnetically from the waste stream. Not all materials are easily recycled, and processing recyclable into the correct waste stream requires considerable energy. Any grade of steel can be recycled to top quality new metal, with no 'downgrading' from prime to lower quality materials as steel is recycled repeatedly. 42% of crude steel produced is recycled material. ===Other metals=== For information about recycling other, less common metals, refer to: *Bismuth recycling *Lead recycling ==Plastic== ==Timber== thumb|A tidy stack of pallets awaits reuse or recycling. To reach this higher temperature, much more energy is needed, leading to the high environmental benefits of aluminium recycling. This process does not produce any change in the metal, so aluminium can be recycled indefinitely. China Metal Recycling (Holdings) Limited () was a company the largest recycler of scrap metal in Mainland China by revenue.China Metal Recycling Seeks $200 Million H.K. 
IPO Based in Guangzhou, Guangdong, it was mainly engaged in collecting scrap steel, scrap copper and other scrap metals and processing them using equipment to produce recycled scrap metals for its customers.China Metal Recycling Holdings Limited Its recycling facilities were located in Guangdong, Jiangsu and Hong Kong.China Metal Recycling (Holdings) Ltd The company was wound up and de-listed after accounting fraud surfaced. ==History== The company was established in 2000. Also, the energy saved by recycling one aluminium can is enough to run a television for three hours. ===Copper=== ===Iron and steel=== thumb|Steel crushed and baled for recycling. This mission is underpinned by five key objectives: * To be an effective voice for the metals recycling industry in the UK. Similarly, asphalt roof shingles can be recycled for use in new asphalt pavements. ==Concrete== ==Glass== ==Metals== ===Aluminium=== Aluminium is one of the most efficient and widely recycled materials.DRLP Fact SheetsEnvironmental Protection Agency Frequently Asked Questions about Recycling and Waste Management Aluminium is shredded and ground into small pieces or crushed into bales. Recycling aluminium saves 95% of the energy cost of processing new aluminium. Recycling is via a steelworks: scrap is either remelted in an electric arc furnace (90-100% scrap), or used as part of the charge in a Basic Oxygen Furnace (around 25% scrap). At the same time, many recycle a wide range of related products, such as end of life vehicles, packaging, batteries, domestic appliances, building materials and electronic goods. While legislation was passed in 1988 requiring scrap metal recovery to be licensed as a ‘waste disposal’ activity, ten years later the first case was brought on whether certain grades of scrap metal should considered as waste. ==References== ==External links== * British Metals Recycling Association * The Bureau of International Recycling * The European Recycling Industries’ Confederation (EuRIC) Category:Huntingdonshire Category:Trade associations based in the United Kingdom Category:Organisations based in Cambridgeshire Category:Recycling in the United Kingdom Category:Recycling organizations Category:Organizations established in 2001 Category:2001 establishments in the United Kingdom Recycling timber has become popular due to its image as an environmentally friendly product, with consumers commonly believing that by purchasing recycled wood the demand for green timber will fall and ultimately benefit the environment. 
","The demand for rare metals will quickly exceed the consumed tonnage in 2013, but recycling rare metals with a worldwide production higher than 100 000 t/year is a good way to conserve natural resources and energy.","The demand for rare metals will decrease in 2013, and recycling rare metals with a worldwide production lower than 100 000 t/year is a good way to conserve natural resources and energy.","The demand for rare metals will quickly exceed the consumed tonnage in 2013, but recycling rare metals with a worldwide production higher than 100 000 t/year is not a good way to conserve natural resources and energy.","The demand for rare metals will quickly exceed the consumed tonnage in 2013, but recycling rare metals with a worldwide production lower than 100 000 t/year is not a good way to conserve natural resources and energy.","The demand for rare metals will quickly exceed the consumed tonnage in 2013, and recycling rare metals with a worldwide production lower than 100 000 t/year is urgent and priority should be placed on it in order to conserve natural resources and energy.",E,kaggle200,"Artificial asteroid retrieval may provide scientists and engineers with information regarding asteroid composition, as asteroids are known to sometimes contain rare metals such as palladium and platinum. Attempts at asteroid retrieval include NASA’s Asteroid Redirect Missions from 2013. These efforts were canceled in 2017.
Chronometers often included other innovations to increase their efficiency and precision. Hard stones such as diamond, ruby, and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Chronometer makers also took advantage of the physical properties of rare metals such as gold, platinum, and palladium.
US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes filament made of rare metals, amongst them was tungsten. Lodygin later sold the patent rights to GE.
An urban mine is the stockpile of rare metals in the discarded waste electrical and electronic equipment (WEEE) of a society. Urban mining is the process of recovering these rare metals through mechanical and chemical treatments.","For the production of a HBr redox flow battery no rare metals like lithium or cobalt are required, but the hydrogen electrode requires a precious metal catalyst. Moreover, the energy density of the system is generally higher than other redox flow battery systems.
Metal filament, inert gas US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes filament made of rare metals, amongst them was tungsten. Lodygin invented a process where rare metals such as tungsten can be chemically treated and heat-vaporized onto an electrically heated thread-like wire (platinum, carbon, gold) acting as a temporary base or skeletal form. (US patent 575,002). Lodygin later sold the patent rights to GE.
In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies, examining their energy and resource consumption and use of chemicals.
Rare metals and rare earth elements Electronic devices use thousands of rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. These metals are also used in the renewable energy industry, meaning that consumer electronics are directly competing for the raw materials.","Urban mining is the process of recovering these rare metals through mechanical and chemical treatments. These metals are also used in the renewable energy industry, meaning that consumer electronics are directly competing for the raw materials. Lodygin later sold the patent rights to GE.
An urban mine is the stockpile of rare metals in the discarded waste electrical and electronic equipment (WEEE) of a society. Artificial asteroid retrieval may provide scientists and engineers with information regarding asteroid composition, as asteroids are known to sometimes contain rare metals such as palladium and platinum. Lodygin later sold the patent rights to GE.
In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies, examining their energy and resource consumption and use of chemicals.
Rare metals and rare earth elements Electronic devices use thousands of rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. Lodygin invented a process where rare metals such as tungsten can be chemically treated and heat-vaporized onto an electrically heated thread-like wire (platinum, carbon, gold) acting as a temporary base or skeletal form. For the production of a HBr redox flow battery no rare metals like lithium or cobalt are required, but the hydrogen electrode requires a precious metal catalyst. Moreover, the energy density of the system is generally higher than other redox flow battery systems.
Metal filament, inert gas US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes a filament made of rare metals, among them tungsten. Hard stones such as diamond, ruby, and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Chronometer makers also took advantage of the physical properties of rare metals such as gold, platinum, and palladium.
US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes filament made of rare metals, amongst them was tungstenA","Urban mining is the process of recovering these rare metals through mechanical and chemical treatments.These metals are also used in the renewable energy industry meaning that consumer electronics are directly competing for the raw materialsLodygin later sold the patent rights to GE.
An urban mine is the stockpile of rare metals in the discarded waste electrical and electronic equipment (WEEE) of a society. Artificial asteroid retrieval may provide scientists and engineers with information regarding asteroid composition, as asteroids are known to sometimes contain rare metals such as palladium and platinum. Lodygin later sold the patent rights to GE.
In 2017, Greenpeace USA published a study of 17 of the world's leading consumer electronics companies, examining their energy and resource consumption and use of chemicals.
Rare metals and rare earth elements Electronic devices use thousands of rare metals and rare earth elements (40 on average for a smartphone); these materials are extracted and refined using water- and energy-intensive processes. Lodygin invented a process where rare metals such as tungsten can be chemically treated and heat-vaporized onto an electrically heated thread-like wire (platinum, carbon, gold) acting as a temporary base or skeletal form. For the production of a HBr redox flow battery no rare metals like lithium or cobalt are required, but the hydrogen electrode requires a precious metal catalyst. Moreover, the energy density of the system is generally higher than other redox flow battery systems.
Metal filament, inert gas US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes a filament made of rare metals, among them tungsten. Hard stones such as diamond, ruby, and sapphire were often used as jewel bearings to decrease friction and wear of the pivots and escapement. Chronometer makers also took advantage of the physical properties of rare metals such as gold, platinum, and palladium.
US575002A patent on 01.Dec.1897 to Alexander Lodyguine (Lodygin, Russia) describes filament made of rare metals, amongst them was tungstenA[SEP]What is the reason for recycling rare metals according to the United Nations?","['E', 'A', 'D']",1.0
What is radiometric dating?,"Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. ==Fundamentals== ===Radioactive decay=== All ordinary matter is made up of combinations of chemical elements, each with its own atomic number, indicating the number of protons in the atomic nucleus. Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale.McRae, A. 1998. Radiometric dating is also used to date archaeological materials, including ancient artifacts. Radiometric Dating and the Geological Time Scale: Circular Reasoning or Reliable Tools? The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials. Radiometric Dating and the Geological Time Scale, TalkOrigins Archive Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. Uranium–uranium dating is a radiometric dating technique which compares two isotopes of uranium (U) in a sample: uranium-234 (234U) and uranium-238 (238U). The age that can be calculated by radiometric dating is thus the time at which the rock or mineral cooled to closure temperature. Uranium–thorium dating, also called thorium-230 dating, uranium-series disequilibrium dating or uranium-series dating, is a radiometric dating technique established in the 1960s which has been used since the 1970s to determine the age of calcium carbonate materials such as speleothem or coral. Unlike other radiometric dating techniques, those using the uranium decay series (except for those using the stable final isotopes 206Pb and 207Pb) compare the ratios of two radioactive unstable isotopes. Radiocarbon dating measurements produce ages in ""radiocarbon years"", which must be converted to calendar ages by a process called calibration. A related method is ionium–thorium dating, which measures the ratio of ionium (thorium-230) to thorium-232 in ocean sediment. ===Radiocarbon dating method=== Radiocarbon dating is also simply called carbon-14 dating. This ""wiggle-matching"" technique can lead to more precise dating than is possible with individual radiocarbon dates. Accurate radiometric dating generally requires that the parent has a long enough half-life that it will be present in significant amounts at the time of measurement (except as described below under ""Dating with short-lived extinct radionuclides""), the half-life of the parent is accurately known, and enough of the daughter product is produced to be accurately measured and distinguished from the initial amount of the daughter present in the material. An isochron plot is used to solve the age equation graphically and calculate the age of the sample and the original composition. ==Modern dating methods== Radiometric dating has been carried out since 1905 when it was invented by Ernest Rutherford as a method by which one might determine the age of the Earth. 
This method requires at least one of the isotope systems to be very precisely calibrated, such as the Pb-Pb system. ===Accuracy of radiometric dating=== The basic equation of radiometric dating requires that neither the parent nuclide nor the daughter product can enter or leave the material after its formation. As such, it provides a useful bridge in radiometric dating techniques between the ranges of 230Th/238U (accurate up to ca. 450,000 years) and U–Pb dating (accurate up to the age of the solar system, but problematic on samples younger than about 2 million years). ==See also == * Carbon dating * Chronological dating ==References== * * Category:Radiometric dating Category:Uranium This in turn corresponds to a difference in age of closure in the early solar system. ===The 26Al – 26Mg chronometer=== Another example of short-lived extinct radionuclide dating is the – chronometer, which can be used to estimate the relative ages of chondrules. decays to with a half-life of 720 000 years. Dating methods based on extinct radionuclides can also be calibrated with the U-Pb method to give absolute ages. ","Radiometric dating is a method of measuring geological time using geological sedimentation, discovered in the early 20th century.","Radiometric dating is a method of measuring geological time using radioactive decay, discovered in the early 20th century.","Radiometric dating is a method of measuring geological time using the position of rocks, discovered in the early 20th century.","Radiometric dating is a method of measuring geological time using the age of fossils, discovered in the early 20th century.","Radiometric dating is a method of measuring geological time using the cooling of the earth, discovered in the early 20th century.",B,kaggle200,"Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Comminution dating is a developing radiometric dating technique based on the disequilibrium between uranium isotopes in fine-grained sediments.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.","Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed. The method compares the abundance of a naturally occurring radioactive isotope within the material to the abundance of its decay products, which form at a known constant rate of decay. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. By allowing the establishment of geological timescales, it provides a significant source of information about the ages of fossils and the deduced rates of evolutionary change. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.","Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed- Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formedThe use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Comminution dating is a developing radiometric dating technique based on the disequilibrium between uranium isotopes in fine-grained sediments.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied.Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead datingThe method ","Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formed- Radiometric dating, radioactive dating or radioisotope dating is a technique which is used to date materials such as rocks or carbon, in which trace radioactive impurities were selectively incorporated when they were formedThe use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Comminution dating is a developing radiometric dating technique based on the disequilibrium between uranium isotopes in fine-grained sediments.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. The use of radiometric dating was first published in 1907 by Bertram Boltwood and is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of Earth itself, and can also be used to date a wide range of natural and man-made materials.
Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geologic time scale. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. Radiometric dating is also used to date archaeological materials, including ancient artifacts.
Different methods of radiometric dating vary in the timescale over which they are accurate and the materials to which they can be applied. Among the best-known techniques are radiocarbon dating, potassium–argon dating and uranium–lead dating. The method [SEP]What is radiometric dating?","['B', 'D', 'C']",1.0
What is the role of methane in Fischer-Tropsch processes?,"The Fischer–Tropsch process is a collection of chemical reactions that converts a mixture of carbon monoxide and hydrogen, known as syngas, into liquid hydrocarbons. Most important is the water-gas shift reaction, which provides a source of hydrogen at the expense of carbon monoxide: :H2O + CO -> H2 + CO2 For FT plants that use methane as the feedstock, another important reaction is dry reforming, which converts the methane into CO and H2: : CH4 + CO2 -> 2CO + 2H2 ===Process conditions=== Generally, the Fischer–Tropsch process is operated in the temperature range of . Fischer-Tropsch process is discussed as a step of producing carbon-neutral liquid hydrocarbon fuels from CO2 and hydrogen.Davis, S.J., Lewis, N.S., Shaner, M., Aggarwal, S., Arent, D., Azevedo, I.L., Benson, S.M., Bradley, T., Brouwer, J., Chiang, Y.M. and Clack, C.T., 2018. Methane functionalization is the process of converting methane in its gaseous state to another molecule with a functional group, typically methanol or acetic acid, through the use of transition metal catalysts. This reactivity can be important for synthesis gas derived from coal or biomass, which tend to have relatively low H2:CO ratios (< 1). === Design of the Fischer–Tropsch process reactor === Efficient removal of heat from the reactor is the basic need of FT reactors since these reactions are characterized by high exothermicity. Low-temperature Fischer–Tropsch (LTFT) uses an iron- or cobalt- based catalyst. The Fischer–Tropsch process is an important reaction in both coal liquefaction and gas to liquids technology for producing liquid hydrocarbons. The functionalization of methane in particular has been reported in four different methods that use homogeneous catalysts rather than heterogeneous catalysts. Hydrogen and carbon dioxide react over a cobalt-based catalyst, producing methane. The reaction depends on a delicate balance between methane pressure and catalyst concentration, and consequently more work is being done to further improve yields. ==References== Category:Organometallic chemistry Category:Organic chemistry Category:Chemistry Category:Methane The technology can be used to convert natural gas, biomass or coal into synthetic fuels. ===Shell middle distillate synthesis=== One of the largest implementations of Fischer–Tropsch technology is in Bintulu, Malaysia. The large abundance of methane in natural gas or shale gas deposits presents a large potential for its use as a feedstock in modern chemistry. Below this, methane is produced. Such efforts have had only limited success. ==Catalysts== Four metals are active as catalysts for the Fischer–Tropsch process: iron, cobalt, nickel, and ruthenium. Naturally occurring methane is mainly produced by the process of methanogenesis, a form of anaerobic respiration used by microorganisms as an energy source. thumb|Methane Aerobic methane production is a potential biological pathway for atmospheric methane (CH4) production under oxygenated conditions. The catalyst factory has a capacity of over 15 tons per year, and produces the unique proprietary Fischer–Tropsch catalysts developed by the company's R&D; division. Science, 360(6396), p.eaas9793 The process was first developed by Franz Fischer and Hans Tropsch at the Kaiser Wilhelm Institute for Coal Research in Mülheim an der Ruhr, Germany, in 1925. 
==Reaction mechanism== The Fischer–Tropsch process involves a series of chemical reactions that produce a variety of hydrocarbons, ideally having the formula (CnH2n+2). The main strategy currently used to increase the reactivity of methane uses transition metal complexes to activate the carbon-hydrogen bonds. This way they can drive the reaction so as to minimize methane formation without producing many long-chained hydrocarbons. ",Methane is partially converted to carbon monoxide for utilization in Fischer-Tropsch processes.,Methane is used as a catalyst in Fischer-Tropsch processes.,Methane is not used in Fischer-Tropsch processes.,Methane is fully converted to carbon monoxide for utilization in Fischer-Tropsch processes.,Methane is a byproduct of Fischer-Tropsch processes.,A,kaggle200,"Vanadium nitrogenases have also been shown to catalyze the conversion of CO into alkanes through a reaction comparable to Fischer-Tropsch synthesis.
Naturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low cost. Current technology makes prodigious use of methane by steam reforming to produce syngas, a mixture of carbon monoxide and hydrogen. This syngas is then used in Fischer-Tropsch reactions to make longer carbon chain products or methanol, one of the most important industrial chemical feedstocks. An intriguing method to convert these hydrocarbons involves C-H activation. Roy A. Periana, for example, reported that complexes containing late transition metals, such as Pt, Pd, Au, and Hg, react with methane (CH4) in H2SO4 to yield methyl bisulfate. The process has not however been implemented commercially.
Ethylene is produced by several methods in the petrochemical industry. A primary method is steam cracking (SC) where hydrocarbons and steam are heated to 750–950 °C. This process converts large hydrocarbons into smaller ones and introduces unsaturation. When ethane is the feedstock, ethylene is the product. Ethylene is separated from the resulting mixture by repeated compression and distillation. In Europe and Asia, ethylene is obtained mainly from cracking naphtha, gasoil and condensates with the coproduction of propylene, C4 olefins and aromatics (pyrolysis gasoline). Other technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. This disparity contrasts with the relative abundance of methane and carbon dioxide. Methane is often partially converted to carbon monoxide for utilization in Fischer-Tropsch processes. Of interest for upgrading methane is its oxidative coupling:","Naturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low cost. Current technology makes prodigious use of methane by steam reforming to produce syngas, a mixture of carbon monoxide and hydrogen. This syngas is then used in Fischer-Tropsch reactions to make longer carbon chain products or methanol, one of the most important industrial chemical feedstocks. An intriguing method to convert these hydrocarbons involves C-H activation. Roy A. Periana, for example, reported that complexes containing late transition metals, such as Pt, Pd, Au, and Hg, react with methane (CH4) in H2SO4 to yield methyl bisulfate. The process has not however been implemented commercially.
Industrial process Ethylene is produced by several methods in the petrochemical industry. A primary method is steam cracking (SC) where hydrocarbons and steam are heated to 750–950 °C. This process converts large hydrocarbons into smaller ones and introduces unsaturation. When ethane is the feedstock, ethylene is the product. Ethylene is separated from the resulting mixture by repeated compression and distillation. In Europe and Asia, ethylene is obtained mainly from cracking naphtha, gasoil and condensates with the coproduction of propylene, C4 olefins and aromatics (pyrolysis gasoline). Other technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
Carbon monoxide and methanol are important chemical feedstocks. CO is utilized by myriad carbonylation reactions. Together with hydrogen, it is the feed for the Fischer–Tropsch process, which affords liquid fuels. Methanol is the precursor to acetic acid, dimethyl ether, formaldehyde, and many methyl compounds (esters, amines, halides). A larger scale application is methanol to olefins, which produces ethylene and propylene.In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. This disparity contrasts with the relative abundance of methane and carbon dioxide. Methane is often partially converted to carbon monoxide for utilization in Fischer-Tropsch processes. Of interest for upgrading methane is its oxidative coupling: 2CH4 + O2 → C2H4 + 2H2OConversion of carbon dioxide to unsaturated hydrocarbons via electrochemical reduction is a hopeful avenue of research, but no stable and economic technology yet has been developed.","Methane is often partially converted to carbon monoxide for utilization in Fischer-Tropsch processesTogether with hydrogen, it is the feed for the Fischer–Tropsch process, which affords liquid fuelsThis syngas is then used in Fischer-Tropsch reactions to make longer carbon chain products or methanol, one of the most important industrial chemical feedstocksOther technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. Current technology makes prodigious use of methane by steam reforming to produce syngas, a mixture of carbon monoxide and hydrogen. Of interest for upgrading methane is its oxidative coupling: 2CH4 + O2 → C2H4 + 2H2O. Vanadium nitrogenases have also been shown to catalyze the conversion of CO into alkanes through a reaction comparable to Fischer-Tropsch synthesis.
Naturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low cost. A larger scale application is methanol to olefins, which produces ethylene and propylene. In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. Other technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
Carbon monoxide and methanol are important chemical feedstocksNaturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low costThis disparity contrasts with the relative abundance of methane and carbon dioxideThis process converts large hydrocarbons into smaller ones and introduces unsaturationOf interest for upgrading methane is its oxidative coupling: 2CH4 + O2 → C2H4 + 2H2OConversion of carbon dioxide to unsaturated hydrocarbons via electrochemical reduction is a hopeful avenue of research, but no stable and ","Methane is often partially converted to carbon monoxide for utilization in Fischer-Tropsch processesTogether with hydrogen, it is the feed for the Fischer–Tropsch process, which affords liquid fuelsThis syngas is then used in Fischer-Tropsch reactions to make longer carbon chain products or methanol, one of the most important industrial chemical feedstocksOther technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. Current technology makes prodigious use of methane by steam reforming to produce syngas, a mixture of carbon monoxide and hydrogen. Of interest for upgrading methane is its oxidative coupling: 2CH4 + O2 → C2H4 + 2H2O. Vanadium nitrogenases have also been shown to catalyze the conversion of CO into alkanes through a reaction comparable to Fischer-Tropsch synthesis.
Naturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low cost. A larger scale application is methanol to olefins, which produces ethylene and propylene. In contrast to the situation for carbon monoxide and methanol, methane and carbon dioxide have limited uses as feedstocks to chemicals and fuels. Other technologies employed for the production of ethylene include oxidative coupling of methane, Fischer-Tropsch synthesis, methanol-to-olefins (MTO), and catalytic dehydrogenation.
Carbon monoxide and methanol are important chemical feedstocks. Naturally occurring methane is not utilized as a chemical feedstock, despite its abundance and low cost. This disparity contrasts with the relative abundance of methane and carbon dioxide. This process converts large hydrocarbons into smaller ones and introduces unsaturation. Of interest for upgrading methane is its oxidative coupling: 2CH4 + O2 → C2H4 + 2H2O. Conversion of carbon dioxide to unsaturated hydrocarbons via electrochemical reduction is a hopeful avenue of research, but no stable and [SEP]What is the role of methane in Fischer-Tropsch processes?","['A', 'D', 'C']",1.0
What is a phageome?,"thumb|297x297px|Transmission electron micrograph of multiple bacteriophages attached to a bacterial cell wall A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome. The phage group takes its name from bacteriophages, the bacteria- infecting viruses that the group used as experimental model organisms. Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environment. A bacteriophage, or phage for short, is a virus that has the ability to infect bacteria and archaea, and can replicate inside of them. A Bacillus phage is a member of a group of bacteriophages known to have bacteria in the genus Bacillus as host species. Bacteriophages, known as phages, are a form of virus that attach to bacterial cells and inject their genome into the cell. The phage group (sometimes called the American Phage Group) was an informal network of biologists centered on Max Delbrück that contributed heavily to bacterial genetics and the origins of molecular biology in the mid-20th century. The composition of phages that make up a healthy human gut phageome is currently debated, since different methods of research can lead to different results. == See also == *Virosphere == References == Category:Microbiology Category:Bacteriophages Category:Biology Category:Wikipedia Student Program Category:Microbiomes CrAss-like phage are a bacteriophage (virus that infects bacteria) family that was discovered in 2014 by cross assembling reads in human fecal metagenomes. It is important to note that many phages, especially temperate ones, carry genes that can affect the pathogenicity of the host. As antibacterials, phages may also affect the composition of microbiomes, by infecting and killing phage-sensitive strains of bacteria. Using co-occurrence analysis and CRISPR spacer similarities, the phage was predicted to infect Bacteroidota bacteria which are dominant members of the gut microbiome in most individuals. == Taxonomy == The crAss-like phage bacteriophage family is considered highly diverse and consists of four subfamilies- alpha, beta, delta, and gamma- and ten genera within the subfamilies. The genetic manipulation of phage genomes can also be a strategy to circumvent phage resistance. ==Safety aspects== Bacteriophages are bacterial viruses, evolved to infect bacterial cells. Based on initial sequence-based studies of crAss-like phage, the bacteriophage family was predicted to consist of phage with a diversity of lifestyles including lytic, lysogenic, and temperate – a combination of lytic and lysogenic. During the first year of life, crAss-like phage abundance and diversity within the gut microbiome significantly increases. In addition to Delbrück, important scientists associated with the phage group include: Salvador Luria, Alfred Hershey, Seymour Benzer, Charles Steinberg, Gunther Stent, James D. Watson, Frank Stahl, and Renato Dulbecco. ==Origins of the phage group: people, ideas, experiments and personal relationships== Bacteriophages had been a subject of experimental investigation since Félix d'Herelle had isolated and developed methods for detecting and culturing them, beginning in 1917. 
This helped to make research from different laboratories more easily comparable and replicable, helping to unify the field of bacterial genetics.History: The Phage Group , Cold Spring Harbor Laboratory, accessed May 4, 2007 ==Phage course at Cold Spring Harbor Laboratory and at Caltech== Apart from direct collaborations, the main legacy of the phage group resulted from the yearly summer phage course taught at Cold Spring Harbor Laboratory and taught sporadically at Caltech. Phage effects on the human microbiome also contribute to safety issues in phage therapy. The presence of crAss-like phage in the human gut microbiota is not yet associated with any health condition. == Discovery == The crAss (cross-assembly) software used to discover the first crAss-like phage, p-crAssphage (prototypical-crAssphage), relies on cross assembling reads from multiple metagenomes obtained from the same environment. It's hypothesized that crAss-like phage and their hosts use unique mechanisms or combinations of mechanisms to maintain their stable equilibrium. == Humans and crAss-like phage == CrAss-like phage have been identified as a highly abundant and near-universal member of the human gut microbiome. ","A community of viruses and their metagenomes localized in a particular environment, similar to a microbiome.","A community of bacteria and their metagenomes localized in a particular environment, similar to a microbiome.","A community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome.","A community of fungi and their metagenomes localized in a particular environment, similar to a microbiome.","A community of archaea and their metagenomes localized in a particular environment, similar to a microbiome.",C,kaggle200,"Microbiome studies sometimes focus on the behaviour of a specific group of microbiota, generally in relation to or justified by a clear hypothesis. More and more terms like bacteriome, archaeome, mycobiome, or virome have started appearing in the scientific literature, but these terms do not refer to biomes (a regional ecosystem with a distinct assemblage of (micro) organisms, and physical environment often reflecting a certain climate and soil) as the microbiome itself. Consequently, it would be better to use the original terms (bacterial, archaeal, or fungal community). In contrast to the microbiota, which can be studied separately, the microbiome is always composed by all members, which interact with each other, live in the same habitat, and form their ecological niche together. The well-established term ""virome"" is derived from virus and genome and is used to describe viral shotgun metagenomes consisting of a collection of nucleic acids associated with a particular ecosystem or holobiont. ""Viral metagenomes"" can be suggested as a semantically and scientifically better term.
While a biome can cover large areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body.
MEGARes allows users to analyze antimicrobial resistance on a population-level, similar to a microbiome analysis, from a FASTA sequence or keywords in their search bar. Furthermore, users can access AmrPlusplus, a pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome. The term was first used in an article by Modi ""et al"" in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomes. A bacteriophage, or phage for short, is a virus that has the ability to infect bacteria and archaea, and can replicate inside of them. Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environment. Phages make up the majority of most viromes and are currently understood as being the most abundant organism. Oftentimes scientists will look only at a phageome instead of a virome while conducting research.","A microbiome (from Ancient Greek μικρός (mikrós) 'small', and βίος (bíos) 'life') is the community of microorganisms that can usually be found living together in any given habitat. It was defined more precisely in 1988 by Whipps et al. as ""a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical properties. The term thus not only refers to the microorganisms involved but also encompasses their theatre of activity"". In 2020, an international panel of experts published the outcome of their discussions on the definition of the microbiome. They proposed a definition of the microbiome based on a revival of the ""compact, clear, and comprehensive description of the term"" as originally provided by Whipps et al., but supplemented with two explanatory paragraphs. The first explanatory paragraph pronounces the dynamic character of the microbiome, and the second explanatory paragraph clearly separates the term microbiota from the term microbiome.
MEGARes allows users to analyze antimicrobial resistance on a population-level, similar to a microbiome analysis, from a FASTA sequence. Furthermore, users can access AMR++, a bioiinformatics pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome. The term was first used in an article by Modi et al in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomes. A bacteriophage, or phage for short, is a virus that has the ability to infect bacteria and archaea, and can replicate inside of them. Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environment. Phages make up the majority of most viromes and are currently understood as being the most abundant organism. Oftentimes scientists will look only at a phageome instead of a virome while conducting research.","Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environmentFurthermore, users can access AMR++, a bioiinformatics pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiomeFurthermore, users can access AmrPlusplus, a pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiomeA bacteriophage, or phage for short, is a virus that has the ability to infect bacteria and archaea, and can replicate inside of themOftentimes scientists will look only at a phageome instead of a virome while conducting research.Oftentimes scientists will look only at a phageome instead of a virome while conducting researchPhages make up the majority of most viromes and are currently understood as being the most abundant organismThe term was first used in an article by Modi ""et al"" in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomesThe term was first used in an article by Modi et al in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomesThe well-established term ""virome"" is derived from virus and genome and is used to describe viral shotgun metagenomes consisting of a collection of nucleic acids associated with a particular ecosystem or holobiontFor example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body.
MEGARes allows users to analyze antimicrobial resistance on a population-level, similar to a microbiome analysis, from a FASTA sequence or keywords in their search baras ""a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical propertiesMore and m","Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environmentFurthermore, users can access AMR++, a bioiinformatics pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiomeFurthermore, users can access AmrPlusplus, a pipeline for resistome analysis of metagenomic datasets that can be integrated with the MEGARes database.
A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiomeA bacteriophage, or phage for short, is a virus that has the ability to infect bacteria and archaea, and can replicate inside of themOftentimes scientists will look only at a phageome instead of a virome while conducting research.Oftentimes scientists will look only at a phageome instead of a virome while conducting researchPhages make up the majority of most viromes and are currently understood as being the most abundant organismThe term was first used in an article by Modi ""et al"" in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomesThe term was first used in an article by Modi et al in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomesThe well-established term ""virome"" is derived from virus and genome and is used to describe viral shotgun metagenomes consisting of a collection of nucleic acids associated with a particular ecosystem or holobiontFor example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body.
MEGARes allows users to analyze antimicrobial resistance on a population-level, similar to a microbiome analysis, from a FASTA sequence or keywords in their search baras ""a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical propertiesMore and m[SEP]What is a phageome?","['C', 'A', 'D']",1.0
What is organography?,"Organography (from Greek , organo, ""organ""; and , -graphy) is the scientific description of the structure and function of the organs of living things. ==History== Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functions. Organology (from Ancient Greek () 'instrument' and (), 'the study of') is the science of musical instruments and their classifications. ==See also== * morphology (biology) ==References== ==External links== * Organography of plants, especially of the Archegoniata and Spermaphyta, by Dr. K. Goebel Category:Branches of biology “Organizing Organology.” ‘’Selected Reports in Ethnomusicology’’ 8 (1990): 1-34. Pp.3 DeVale defines organology as “the science of sound instruments”.DeVale, Sue Carole. Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). An organ-pipe scanner is a system used in some radar systems to provide scanning in azimuth or elevation without moving the antenna. Anatomical pathology is one of two branches of pathology, the other being clinical pathology, the diagnosis of disease through the laboratory analysis of bodily fluids or tissues. * Cytopathology – the examination of loose cells spread and stained on glass slides using cytology techniques * Electron microscopy – the examination of tissue with an electron microscope, which allows much greater magnification, enabling the visualization of organelles within the cells. The first paper in the journal written by Sue Carole DeVale entitled “Organizing Organology” attempted to provide a more comprehensive system for defining the study of organology, particularly within the context of ethnomusicology.DeVale, Sue Carole. Anatomical pathology (Commonwealth) or Anatomic pathology (U.S.) is a medical specialty that is concerned with the diagnosis of disease based on the macroscopic, microscopic, biochemical, immunologic and molecular examination of organs and tissues. Selected Reports in Ethnomusicology 8 (1990): 1-34. Pp.4-5 She also defines three primary branches-classificatory, analytical, and applied- that serve as the basis for the study of organology.DeVale, Sue Carole. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging. As a field of scientific investigation, medical imaging constitutes a sub- discipline of biomedical engineering, medical physics or medicine depending on the context: Research and development in the area of instrumentation, image acquisition (e.g., radiography), modeling and quantification are usually the preserve of biomedical engineering, medical physics, and computer science; Research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. For much of the 18th and 19th centuries, little work was done on organology. 
In the 17th century Joachim Jung,Joachim Jung, Isagoge phytoscopica (1678) clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position. “Organizing Organology.” “Organizing Organology.” “Organizing Organology.” “Organizing Organology.” “Organizing Organology.” ",Organography is the study of the stem and root of plants.,Organography is the scientific description of the structure and function of the organs of living things.,"Organography is the study of the development of organs from the ""growing points"" or apical meristems.",Organography is the study of the commonality of development between foliage leaves and floral leaves.,Organography is the study of the relationship between different organs and different functions in plants.,B,kaggle200,"Similar views were propounded at by Goethe in his well-known treatise. He wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants.""
In the following century Caspar Friedrich Wolff was able to follow the development of organs from the ""growing points"" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: ""In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functions. In the 17th century Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek , ""organo"", ""organ""; and , ""-graphy"") is the scientific description of the structure and function of the organs of living things.","In the following century Caspar Friedrich Wolff was able to follow the development of organs from the ""growing points"" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: ""In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves."" Similar views were propounded at by Goethe in his well-known treatise. He wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functions. In the 17th century Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek όργανο, organo, ""organ""; and -γραφή, -graphy) is the scientific description of the structure and function of the organs of living things.","In the 17th century Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek όργανο, organo, ""organ""; and -γραφή, -graphy) is the scientific description of the structure and function of the organs of living things. In the 17th century Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek όργανο, ""organo"", ""organ""; and -γραφή, ""-graphy"") is the scientific description of the structure and function of the organs of living things. Consequently all parts of the plant, except the stem, are modified leaves.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functionsHe wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functionsIn the following century Caspar Friedrich Wolff was able to follow the development of organs from the ""growing points"" or apical meristemsHe wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operati","In the 17th century Joachim Jung, clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek όργανο, organo, ""organ""; and -γραφή, -graphy) is the scientific description of the structure and function of the organs of living things. In the 17th century Joachim Jung clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position.
Organography (from Greek όργανο, ""organo"", ""organ""; and -γραφή, ""-graphy"") is the scientific description of the structure and function of the organs of living things. Consequently all parts of the plant, except the stem, are modified leaves.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functionsHe wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants.""
Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functionsIn the following century Caspar Friedrich Wolff was able to follow the development of organs from the ""growing points"" or apical meristemsHe wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operati[SEP]What is organography?","['E', 'B', 'D']",0.5
What is the definition of anatomy?,"Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. The term ""anatomy"" is commonly taken to refer to human anatomy. Anatomy is a branch of natural science that deals with the structural organization of living things. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Human anatomy is one of the essential basic sciences that are applied in medicine. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Molecular anatomy is the subspecialty of microscopic anatomy concerned with the identification and description of molecular structures of cells, tissues, and organs in an organism. == References == Category:Anatomy The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Education in the gross anatomy of humans is included training for most health professionals. ==Techniques of study== Gross anatomy is studied using both invasive and noninvasive methods with the goal of obtaining information about the macroscopic structure and organization of organs and organ systems. The discipline of anatomy is divided into macroscopic and microscopic parts. Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Gross anatomy is the study of anatomy at the visible or macroscopic level. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Clinical Anatomy is a peer-reviewed medical journal that covers anatomy in all its aspects—gross, histologic, developmental, and neurologic—as applied to medical practice.The Clinical Anatomy Overview page It is the official publication of the American Association of Clinical Anatomists, the British Association of Clinical Anatomists, the Australian and New Zealand Association of Clinical Anatomists, and the Anatomical Society of Southern Africa. * Gunther von Hagens True Anatomy for New Ways of Teaching. 
== Source == Category:Branches of biology Category:Morphology (biology) ",Anatomy is the rarely used term that refers to the superstructure of polymers such as fiber formation or to larger composite assemblies.,Anatomy is a branch of morphology that deals with the structure of organisms.,"Anatomy is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.","Anatomy is the analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.",Anatomy is the study of the relationship between the structure and function of morphological features.,B,kaggle200,"Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
Gross anatomy is the study of anatomy at the visible or macroscopic level. The counterpart to gross anatomy is the field of histology, which studies microscopic anatomy. Gross anatomy of the human body or other animals seeks to understand the relationship between components of an organism in order to gain a greater appreciation of the roles of those components and their relationships in maintaining the functions of life. The study of gross anatomy can be performed on deceased organisms using dissection or on living organisms using medical imaging. Education in the gross anatomy of humans is included in training for most health professionals.
Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In birds this is termed ""topography"". Surface anatomy deals with anatomical features that can be studied by sight, without dissection. As such, it is a branch of gross anatomy, along with endoscopic and radiological anatomy. Surface anatomy is a descriptive science. In particular, in the case of human surface anatomy, these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition).","Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In birds this is termed topography. Surface anatomy deals with anatomical features that can be studied by sight, without dissection. As such, it is a branch of gross anatomy, along with endoscopic and radiological anatomy. Surface anatomy is a descriptive science. In particular, in the case of human surface anatomy, these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion.
Gross anatomy is the study of anatomy at the visible or macroscopic level. The counterpart to gross anatomy is the field of histology, which studies microscopic anatomy. Gross anatomy of the human body or other animals seeks to understand the relationship between components of an organism in order to gain a greater appreciation of the roles of those components and their relationships in maintaining the functions of life. The study of gross anatomy can be performed on deceased organisms using dissection or on living organisms using medical imaging. Education in the gross anatomy of humans is included in training for most health professionals.
Comparative morphology is analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization.
Functional morphology is the study of the relationship between the structure and function of morphological features.
Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation.
Anatomy is a ""branch of morphology that deals with the structure of organisms"".
Molecular morphology is a rarely used term, usually referring to the superstructure of polymers such as fiber formation or to larger composite assemblies. The term is commonly not applied to the spatial structure of individual molecules.
Gross morphology refers to the collective structures of an organism as a whole as a general description of the form and structure of an organism, taking into account all of its structures without specifying an individual structure.","- Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. Human anatomy is one of the essential basic sciences that are applied in medicine.
Gross anatomy is the study of anatomy at the visible or macroscopic level. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Surface anatomy deals with anatomical features that can be studied by sight, without dissection. Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In particular, in the case of human surface anatomy, these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Surface anatomy is a descriptive science. Gross anatomy of the human body or other animals seeks to understand the relationship between components of an organism in order to gain a greater appreciation of the roles of those components and their relationships in maintaining the functions of life. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. As such, it is a branch of gross anatomy, along with endoscopic and radiological a","- Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. Human anatomy is one of the essential basic sciences that are applied in medicine.
Gross anatomy is the study of anatomy at the visible or macroscopic level. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Surface anatomy deals with anatomical features that can be studied by sight, without dissection. Surface anatomy (also called superficial anatomy and visual anatomy) is the study of the external features of the body of an animal. In particular, in the case of human surface anatomy, these are the form and proportions of the human body and the surface landmarks which correspond to deeper structures hidden from view, both in static pose and in motion.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Surface anatomy is a descriptive science. Gross anatomy of the human body or other animals seeks to understand the relationship between components of an organism in order to gain a greater appreciation of the roles of those components and their relationships in maintaining the functions of life. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. As such, it is a branch of gross anatomy, along with endoscopic and radiological a[SEP]What is the definition of anatomy?","['B', 'E', 'D']",1.0
What is a trophic level in an ecological pyramid?,"An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem. A pyramid of biomass shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. Energy pyramids are necessarily upright in healthy ecosystems, that is, there must always be more energy available at a given level of the pyramid to support the energy and biomass requirement of the next trophic level. The trophic level of an organism is the number of steps it is from the start of the chain. As well as the organisms in the food chains there is the problem of assigning the decomposers and detritivores to a particular level. ==Pyramid of biomass== thumb|A pyramid of biomass shows the total biomass of the organisms involved at each trophic level of an ecosystem. The organisms it eats are at a lower trophic level, and the organisms that eat it are at a higher trophic level. The trophic level of an organism is the position it occupies in a food web. The trophic cascade is an ecological concept which has stimulated new research in many areas of ecology. A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. Pyramids of energy are normally upright, but other pyramids can be inverted(pyramid of biomass for marine region) or take other shapes.(spindle shaped pyramid) Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. It follows from this that the total energy originally present in the incident sunlight that is finally embodied in a tertiary consumer is about 0.001% ==Evolution== Both the number of trophic levels and the complexity of relationships between them evolve as life diversifies through time, the exception being intermittent mass extinction events. ==Fractional trophic levels== Food webs largely define ecosystems, and the trophic levels define the position of organisms within the webs. The definition of the trophic level, TL, for any consumer species is : TL_i=1 + \sum_j (TL_j \cdot DC_{ij}), where TL_j is the fractional trophic level of the prey j, and DC_{ij} represents the fraction of j in the diet of i. The definition of the trophic level, TL, for any consumer species is: :: TL_i=1 + \sum_j (TL_j \cdot DC_{ij})\\! where TL_j is the fractional trophic level of the prey j, and DC_{ij} represents the fraction of j in the diet of i. The trophic-dynamic aspect of ecology. Typically, about 10% of the energy is transferred from one trophic level to the next, thus preventing a large number of trophic levels. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. For example, a traditional Inuit living on a diet consisting primarily of seals would have a trophic level of nearly 5. 
==Biomass transfer efficiency== In general, each trophic level relates to the one below it by absorbing some of the energy it consumes, and in this way can be regarded as resting on, or supported by, the next lower trophic level. *For trophic cascades to be ubiquitous, communities must generally act as food chains, with discrete trophic levels. When an ecosystem is healthy, this graph produces a standard ecological pyramid. ",A group of organisms that acquire most of their energy from the level above them in the pyramid.,A group of organisms that acquire most of their energy from the abiotic sources in the ecosystem.,A group of organisms that acquire most of their energy from the level below them in the pyramid.,A group of organisms that acquire most of their energy from the same level in the pyramid.,A group of organisms that do not acquire any energy from the ecosystem.,C,kaggle200,"An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A ""pyramid of biomass"" shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. It is a graphical representation of biomass (total amount of living or organic matter in an ecosystem) present in unit area in different trophic levels. Typical units are grams per square meter, or calories per square meter.
A ""pyramid of energy"" shows how much energy is retained in the form of new biomass at each trophic level, while a ""pyramid of biomass"" shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a ""pyramid of numbers"" representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted or take other shapes.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in a given ecosystem.","An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A biomass pyramid shows the amount of biomass at each trophic level.
A productivity pyramid shows the production or turn-over in biomass at each trophic level. An ecological pyramid provides a snapshot in time of an ecological community.
The bottom of the pyramid represents the primary producers (autotrophs). The primary producers take energy from the environment in the form of sunlight or inorganic chemicals and use it to create energy-rich molecules such as carbohydrates. This mechanism is called primary production. The pyramid then proceeds through the various trophic levels to the apex predators at the top.
A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted(pyramid of biomass for marine region) or take other shapes.(spindle shaped pyramid) Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.","The highest level is the top of the food chain.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem. An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A biomass pyramid shows the amount of biomass at each trophic level.
A productivity pyramid shows the production or turn-over in biomass at each trophic level. An ecological pyramid provides a snapshot in time of an ecological community.
The bottom of the pyramid represents the primary producers (autotrophs). An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A ""pyramid of biomass"" shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. Pyramids of energy are normally upright, but other pyramids can be inverted or take other shapes.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in a given ecosystem. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (pyramid of biomass for marine region) or take other shapes (spindle shaped pyramid). Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The pyramid then proceeds through the various trophic levels to the apex predators at the top.
A pyramid of energy shows how much energy is retained in the form of new b","The highest level is the top of the food chain.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem. An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A biomass pyramid shows the amount of biomass at each trophic level.
A productivity pyramid shows the production or turn-over in biomass at each trophic level. An ecological pyramid provides a snapshot in time of an ecological community.
The bottom of the pyramid represents the primary producers (autotrophs). An ecological pyramid is a graphical representation that shows, for a given ecosystem, the relationship between biomass or biological productivity and trophic levels.
A ""pyramid of biomass"" shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. Pyramids of energy are normally upright, but other pyramids can be inverted or take other shapes.
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in a given ecosystem. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (pyramid of biomass for marine region) or take other shapes (spindle shaped pyramid). Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The pyramid then proceeds through the various trophic levels to the apex predators at the top.
A pyramid of energy shows how much energy is retained in the form of new b[SEP]What is a trophic level in an ecological pyramid?","['C', 'A', 'B']",1.0
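The fractional trophic level formula quoted in the context above, TL_i = 1 + \sum_j (TL_j \cdot DC_{ij}), is easy to check with a small worked example. The sketch below is an illustrative aside rather than part of the dataset; the function name fractional_trophic_levels and the toy diet fractions are hypothetical, and it assumes producers sit at level 1 and that the food web contains no feeding loops.

```python
# Minimal sketch (hypothetical, not from the dataset): fractional trophic levels
# computed as TL_i = 1 + sum_j(TL_j * DC_ij), where DC_ij is the fraction of
# prey j in the diet of consumer i and producers are assigned TL = 1.

def fractional_trophic_levels(diets):
    """diets maps each species to {prey: diet fraction}; producers map to {}."""
    levels = {}

    def level(species):
        if species not in levels:
            prey = diets[species]
            # An empty diet marks a producer, which contributes only the base level of 1.
            levels[species] = 1.0 + sum(frac * level(p) for p, frac in prey.items())
        return levels[species]

    for s in diets:
        level(s)
    return levels

# Toy food web: an omnivore taking 60% of its diet from plants (TL 1) and 40%
# from herbivores (TL 2) lands at 1 + 0.6*1 + 0.4*2 = 2.4.
web = {
    "plant": {},
    "herbivore": {"plant": 1.0},
    "omnivore": {"plant": 0.6, "herbivore": 0.4},
}
print(fractional_trophic_levels(web))
# {'plant': 1.0, 'herbivore': 2.0, 'omnivore': 2.4}
```

With the roughly 10% transfer efficiency mentioned in the same context, each step up such a pyramid retains only about a tenth of the energy of the level below it, which is why long food chains are rare.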
What is a crossover experiment?,"In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other. Crossover designs are common for experiments in many scientific disciplines, for example psychology, pharmaceutical science, and medicine. However, in some cases a crossover experiment alone will be able to distinguish between the main possibilities, for example in the case of intramolecular vs. intermolecular organic reaction mechanisms. In practice, crossover experiments aim to use the least change possible between the usual conditions of the reaction being studied and the conditions of the crossover experiment. Crossover experiments. In medicine, a crossover study or crossover trial is a longitudinal study in which subjects receive a sequence of different treatments (or exposures). The crossover experiment has the advantage of being conceptually straightforward and relatively easy to design, carry out, and interpret. While crossover studies can be observational studies, many important crossover studies are controlled experiments, which are discussed in this article. A crossover trial has a repeated measures design in which each patient is assigned to a sequence of two or more treatments, of which one may be a standard treatment or a placebo. It can be difficult to know whether or not the changes made to reactants for a crossover experiment will affect the mechanism by which the reaction proceeds. For crossover experiments used to distinguish between intermolecular and intramolecular reactions, the absence of crossover products is less conclusive than the presence of crossover products. The design of a useful crossover experiment relies on having a proposed mechanism on which to base predictions of the label distribution in the products. Predicting the products given by each mechanism will show whether or not a given crossover experiment design can distinguish between the mechanisms in question. The results of crossover experiments are often straightforward to analyze, making them one of the most useful and most frequently applied methods of mechanistic study. This is known as a doubly labeled system, and is generally the requirement for a crossover experiment. A well- designed crossover experiment can lead to conclusions about a mechanism that would otherwise be impossible to make. In a crossover experiment, two similar but distinguishable reactants simultaneously undergo a reaction as part of the same reaction mixture. In modern mechanistic studies, crossover experiments and KIE studies are commonly used in conjunction with computational methods.Crabtree, R. H.; Dalton Trans., 2013, 42, 4104. == Theory == 550px|center The concept underlying the crossover experiment is a basic one: provided that the labeling method chosen does not affect the way a reaction proceeds, a shift in the labeling as observed in the products can be attributed to the reaction mechanism. Many mechanistic studies include both crossover experiments and measurements of rate and kinetic isotope effects. == Purpose == Crossover experiments allow for experimental study of a reaction mechanism. 
",An experiment that involves crossing over two different types of materials to create a new material.,"A type of experiment used to distinguish between different mechanisms proposed for a chemical reaction, such as intermolecular vs. intramolecular mechanisms.",An experiment that involves crossing over two different types of organisms to create a hybrid.,An experiment that involves crossing over two different types of cells to create a new cell.,An experiment that involves crossing over two different chemicals to create a new substance.,B,kaggle200,"An isotopic labeling experiment is an experiment used in mechanistic study that employs isotopes as labels and traces these labels in the products. Isotopic labeling experiments are commonly considered to be a type of crossover experiment. However, there are far more possibilities for the manner of labeling and potential products in an isotopic labeling experiment than in a traditional crossover experiment. The classification of an isotopic labeling experiment as a crossover experiment is based on the similar underlying concept, goal, and design principles in the two experiments rather than on direct similarity. An isotopic labeling experiment can be designed to be directly analogous to a traditional crossover experiment, but there are many additional ways of carrying out isotopic labeling experiments.
Looking at these two proposed mechanisms, it is clear that a crossover experiment will be suitable for distinguishing between them, as is generally the case for inter- and intramolecular mechanisms. The next step in crossover experiment design is to propose labeled reactants. For a non-isotopic labeling method the smallest perturbation to the system will be by addition of a methyl group at an unreactive position.
In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. In a crossover experiment, two similar but distinguishable reactants simultaneously undergo a reaction as part of the same reaction mixture. The products formed will either correspond directly to one of the two reactants (non-crossover products) or will include components of both reactants (crossover products). The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other.
In designing a crossover experiment the first task is to propose possible mechanisms for the reaction being studied. Based on these possible mechanisms, the goal is to determine either a traditional crossover experiment or an isotope scrambling experiment that will enable the researcher to distinguish between the two or more possible mechanisms. Often many methods of mechanistic study will have to be employed to support or discount all of the mechanisms proposed. However, in some cases a crossover experiment alone will be able to distinguish between the main possibilities, for example in the case of intramolecular vs. intermolecular organic reaction mechanisms.","Looking at these two proposed mechanisms, it is clear that a crossover experiment will be suitable for distinguishing between them, as is generally the case for inter- and intramolecular mechanisms. The next step in crossover experiment design is to propose labeled reactants. For a non-isotopic labeling method the smallest perturbation to the system will be by addition of a methyl group at an unreactive position.
In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. In a crossover experiment, two similar but distinguishable reactants simultaneously undergo a reaction as part of the same reaction mixture. The products formed will either correspond directly to one of the two reactants (non-crossover products) or will include components of both reactants (crossover products). The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other.
In designing a crossover experiment the first task is to propose possible mechanisms for the reaction being studied. Based on these possible mechanisms, the goal is to determine either a traditional crossover experiment or an isotope scrambling experiment that will enable the researcher to distinguish between the two or more possible mechanisms. Often many methods of mechanistic study will have to be employed to support or discount all of the mechanisms proposed. However, in some cases a crossover experiment alone will be able to distinguish between the main possibilities, for example in the case of intramolecular vs. intermolecular organic reaction mechanisms.","In a crossover experiment, two similar but distinguishable reactants simultaneously undergo a reaction as part of the same reaction mixture. For a non-isotopic labeling method the smallest perturbation to the system will be by addition of a methyl group at an unreactive position.
In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other.
In designing a crossover experiment the first task is to propose possible mechanisms for the reaction being studied. However, in some cases a crossover experiment alone will be able to distinguish between the main possibilities, for example in the case of intramolecular vs. intermolecular mechanisms. Isotopic labeling experiments are commonly considered to be a type of crossover experiment. The next step in crossover experiment design is to propose labeled reactants. The classification of an isotopic labeling experiment as a crossover experiment is based on the similar underlying concept, goal, and design principles in the two experiments rather than on direct similarity. The products formed will either correspond directly to one of the two reactants (non-crossover products) or will include components of both reactants (crossover products). However, there are far more possibilities for the manner of labeling and potential products in an isotopic labeling experiment than in a traditional crossover experiment. Based on these possible mechanisms, the goal is to determine either a traditional crossover experiment or an isotope scrambling experiment that will enable the researcher to distinguish between the two or more possible mechanisms. Looking at these two proposed mechanisms, it is clear that a crossover experiment will be suitable for distinguishing between them, as is generally the case for inter- and intramolecular mechanisms. An isotopic labeling experiment is an experiment used in mechanistic study that employs isotopes as labels and traces these
In chemistry, a crossover experiment is a method used to study the mechanism of a chemical reaction. The aim of a crossover experiment is to determine whether or not a reaction process involves a stage where the components of each reactant have an opportunity to exchange with each other.
In designing a crossover experiment the first task is to propose possible mechanisms for the reaction being studied. However, in some cases a crossover experiment alone will be able to distinguish between the main possibilities, for example in the case of intramolecular vs. intermolecular mechanisms. Isotopic labeling experiments are commonly considered to be a type of crossover experiment. The next step in crossover experiment design is to propose labeled reactants. The classification of an isotopic labeling experiment as a crossover experiment is based on the similar underlying concept, goal, and design principles in the two experiments rather than on direct similarity. The products formed will either correspond directly to one of the two reactants (non-crossover products) or will include components of both reactants (crossover products). However, there are far more possibilities for the manner of labeling and potential products in an isotopic labeling experiment than in a traditional crossover experiment. Based on these possible mechanisms, the goal is to determine either a traditional crossover experiment or an isotope scrambling experiment that will enable the researcher to distinguish between the two or more possible mechanisms. Looking at these two proposed mechanisms, it is clear that a crossover experiment will be suitable for distinguishing between them, as is generally the case for inter- and intramolecular mechanisms. An isotopic labeling experiment is an experiment used in mechanistic study that employs isotopes as labels and traces these[SEP]What is a crossover experiment?","['B', 'A', 'C']",1.0
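The intramolecular-versus-intermolecular logic described in this row can be made concrete with a toy enumeration. The sketch below is an illustrative aside, not dataset content and not a real chemistry library; the predict_products helper and the A/B fragment labels are hypothetical, assuming a doubly labeled pair of reactants A-B and A*-B* whose fragments either stay together (intramolecular) or can recombine freely (intermolecular).

```python
# Minimal sketch (hypothetical): which products a doubly labeled crossover
# experiment predicts under each proposed mechanism.
from itertools import product

def predict_products(reactants, mechanism):
    """reactants is a list of (fragment_a, fragment_b) pairs, e.g. labeled/unlabeled."""
    if mechanism == "intramolecular":
        # Fragments never separate, so only the original pairings can appear.
        return {(a, b) for a, b in reactants}
    if mechanism == "intermolecular":
        # Fragments exchange between molecules, so every a/b pairing is possible.
        a_parts = {a for a, _ in reactants}
        b_parts = {b for _, b in reactants}
        return set(product(a_parts, b_parts))
    raise ValueError("unknown mechanism")

# Doubly labeled system: unlabeled A-B reacted alongside labeled A*-B*.
reactants = [("A", "B"), ("A*", "B*")]
intra = predict_products(reactants, "intramolecular")
inter = predict_products(reactants, "intermolecular")

# Crossover products appear only under the intermolecular mechanism, which is
# exactly what the experiment looks for in the product mixture.
print(sorted(inter - intra))  # [('A', 'B*'), ('A*', 'B')]
```

Observing A-B* or A*-B in the real product mixture would therefore point to an intermolecular pathway, while seeing only A-B and A*-B* is consistent with, though as the context notes does not strictly prove, the intramolecular one.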
What is the role of IL-10 in the formation of Tr1 cells and tolerogenic DCs?,"The features of the IL-10 family consists of their genomic structure being similar, their primary and secondary protein structures being similar, their a clustering of encoding genes, and their utilization the similar receptor complexes. === IL-10 === Interleukin 10 is produced by regulatory T lymphocytes, B cells, and monocytes. The IL-10Rα subunit acts as the ligand binding site and may be upregulated in various cell types as necessary. The difference that the members of IL-10 family have between each other is that they have various receptor-binding residues, which help with interaction with specific cytokine receptors. IL-10R2 receptor is presented in most cells, when IL-10R1 receptor is IL-10 is also an inhibitor of expressions of CD80 and CD86 by dendritic cells (DC) and antigen- presenting cells (APC), and of T cells, decreasing their cytokine production, therefore, controlling their activation. The IL-10Rβ functions as the signaling subunit and is constitutively expressed in a majority of cell types. IL-10 plays a big role in regulating allergies by inhibiting cytokines responsible for allergic inflammation. === IL-19 === Interleukin 19 is produced mainly in monocytes, and can be found in big concentrations in patients with allergic disorders and psoriasis. IL-10 subfamily cytokine selects the innate and adaptive immune response and can prevent the function to reduce tissue damage. IL-26 assist with the process of human T cell transformation after their infections. == Three subgroups of IL-10 family == Based on the functions of the cytokine, the IL-10 family can be separated into three subfamily groups. The α subunit is exclusive to interleukin-10, however the β subunit is shared with other type II cytokine receptors such as IL-22R, IL-26R and INFλR. Interleukin 10 (Il-10) is an anti-inflammatory cytokine. Interleukin-10 receptor (IL-10R) is a type II cytokine receptor. The IL-10 family are helical cytokines categorized based on their specific similarities and can be classified as class 2 cytokines. == Biological activity == The IL-10 family is one of the important types of cytokines, that can stop the inflammation. In addition to IL-10, it includes IL-19, IL-20, IL-22, [interleukin 24|IL-24]] and IL-26. There is evidence that upon ligand binding at the α subunit, a conformational change occurs in the β subunit that allows it to additionally bind to IL-10. The Interleukin-10 receptor is implicated in regulation of gastro-intestinal immune response, primarily in the mucosal layer. Interleukin-1 family member 10 is a protein that in humans is encoded by the IL1F10 gene. CXCL10 has been attributed to several roles, such as chemoattraction for monocytes/macrophages, T cells, NK cells, and dendritic cells, promotion of T cell adhesion to endothelial cells, antitumor activity, and inhibition of bone marrow colony formation and angiogenesis. IL-19 plays a big role in the CNS by regulating the inflammation process through a delayed production of it. === IL-20 === IL-20 - induces cheratin proliferation and Stat-3 signal transduction pathway; is expressed in the CNS, myeloid cells, and keratinocytes. The α subunit (encoded in the Il10ra gene) is expressed on haematopoietic cells (such as T, B, NK, mast, and dendritic cells) whilst the β subunit (encoded in the Il10rb gene) is expressed ubiquitously. 
C-X-C motif chemokine ligand 10 (CXCL10) also known as Interferon gamma- induced protein 10 (IP-10) or small-inducible cytokine B10 is an 8.7 kDa protein that in humans is encoded by the CXCL10 gene. ","IL-10 inhibits the formation of Tr1 cells and tolerogenic DCs, which are dependent on TGF-β and Tregs. Tr1 cells produce low levels of IL-10 and TGF-β, while tolerogenic DCs produce TGF-β that is important for Tr1 formation.","IL-10 induces the formation of Tr1 cells and tolerogenic DCs, which are dependent on IL-10 and TGF-β, but differ from Tregs by lacking expression of Foxp3. Tr1 cells produce high levels of IL-10 and TGF-β, while tolerogenic DCs produce IL-10 that is important for Tr1 formation.",IL-10 has no role in the formation of Tr1 cells and tolerogenic DCs. TGF-β and Tregs are the only factors involved in the formation of Tr1 cells and tolerogenic DCs.,"IL-10 induces the formation of Tr1 cells and tolerogenic DCs, which are dependent on IL-10 and Tregs, but differ from Tregs by expressing Foxp3. Tr1 cells produce low levels of IL-10 and TGF-β, while tolerogenic DCs produce IL-10 that is important for Tr1 formation.","IL-10 induces the formation of Tregs, which are dependent on TGF-β and Foxp3. Tr1 cells and tolerogenic DCs are not involved in this process.",B,kaggle200,"Tolerogenic DCs often display an immature or semi-mature phenotype with characteristically low expression of costimulatory (e.g. CD80, CD86) and MHC molecules on their surface. Tolerogenic DCs also produce different cytokines as mature DCs (e.g. anti-inflammatory cytokines interleukin (IL)-10, transforming growth factor-β (TGF-β)). Moreover, tolerogenic DCs may also express various inhibitory surface molecules (e.g. programmed cell death ligand (PDL)-1, PDL-2) or can modulate metabolic parameters and change T cell response. For example, tolerogenic DCs can release or induce enzymes such as indoleamine 2,3-dioxygenase (IDO) or heme oxygenase-1 (HO-1). IDO promotes the degradation of tryptophan to N-formylkynurenin leading to reduced T cell proliferation, whereas HO- 1 catalyzes degradation of hemoglobin resulting in production of monoxide and lower DC immunogenicity. Besides that, tolerogenic DCs also may produce retinoic acid (RA), which induces Treg differentiation.
The specific cell-surface markers for Tr1 cells in humans and mice are CD4 CD49bLAG-3 CD226 from which LAG-3 and CD49b are indispensable. LAG-3 is a membrane protein on Tr1 cells that negatively regulates TCR-mediated signal transduction in cells. LAG-3 activates dendritic cells (DCs) and enhances the antigen-specific T-cell response which is necessary for Tr1 cells antigen specificity. CD49b belongs to the integrin family and is a receptor for many (extracellular) matrix and non-matrix molecules. CD49b provides only little contribution to the differentiation and function of Tr1 cells.
Tr1 cells secrete large amount of suppressing cytokines IL-10 and TGF-β. IL-10 directly inhibits T cells by blocking its production of IL-2, IFN-γ and GM-CSF and have tolerogenic effect on B cells and support differentiation of other regulatory T cells. IL-10 indirectly downregulates MHC II molecules and co-stimulatory molecules on antigen-presenting cells (APC) and force them to upregulate tolerogenic molecules such as ILT-3, ILT-4 and HLA-G.
During a tolerant state potential effector cells remain but are tightly regulated by induced antigen-specific CD4+ regulatory T cells (iTregs). Many subsets of iTregs play a part in this process, but CD4CD25FoxP3 Tregs play a key role, because they have the ability to convert conventional T cells into iTregs directly by secretion of the suppressive cytokines TGF-β, IL-10 or IL-35, or indirectly via dendritic cells (DCs). Production of IL-10 induces the formation of another population of regulatory T cells called Tr1. Tr1 cells are dependent on IL-10 and TGF-β as well as Tregs, but differ from them by lacking expression of Foxp3. High IL-10 production is characteristic for Tr1 cells themselves and they also produce TGF-β. In the presence of IL-10 can be also induced tolerogenic DCs from monocytes, whose production of IL-10 is also important for Tr1 formation. These interactions lead to the production of enzymes such as IDO (indolamine 2,3-dioxygenase) that catabolize essential amino acids. This microenvironment with a lack of essential amino acids together with other signals results in mTOR (mammalian target of rapamycin) inhibition which, particularly in synergy with TGF-β, direct the induction of new FoxP3 (forkhead box protein 3) expressing Tregs.","The specific cell-surface markers for Tr1 cells in humans and mice are CD4+ CD49b+LAG-3+ CD226+ from which LAG-3+ and CD49b+ are indispensable. LAG-3 is a membrane protein on Tr1 cells that negatively regulates TCR-mediated signal transduction in cells. LAG-3 activates dendritic cells (DCs) and enhances the antigen-specific T-cell response which is necessary for Tr1 cells antigen specificity. CD49b belongs to the integrin family and is a receptor for many (extracellular) matrix and non-matrix molecules. CD49b provides only little contribution to the differentiation and function of Tr1 cells.They characteristically produce high levels of IL-10, IFN-γ, IL-5 and also TGF- β but neither IL-4 nor IL-2. Production of IL-10 is also much more rapid than its production by other T-helper cell types.Tr1 cells do not constitutively express FOXP3 but only transiently, upon their activation and in smaller amounts than CD25+ FOXP3+ regulatory cells. FOXP3 is not required for Tr1 induction, nor for its function. They also express repressor of GATA-3 (ROG), while CD25+ FOXP3+ regulatory cells do not. ROG then downregulates GATA-3, a characteristic transcription factor for Th2 cells.
Tolerogenic DCs are essential in maintenance of central and peripheral tolerance through induction of T cell clonal deletion, T cell anergy and generation and activation of regulatory T (Treg) cells. For that reason, tolerogenic DCs are possible candidates for specific cellular therapy for treatment of allergic diseases, autoimmune diseases (e.g. type 1 diabetes, multiple sclerosis, rheumatoid arthritis) or transplant rejections.Tolerogenic DCs often display an immature or semi-mature phenotype with characteristically low expression of costimulatory (e.g. CD80, CD86) and MHC molecules on their surface. Tolerogenic DCs also produce different cytokines as mature DCs (e.g. anti-inflammatory cytokines interleukin (IL)-10, transforming growth factor-β (TGF-β)). Moreover, tolerogenic DCs may also express various inhibitory surface molecules (e.g. programmed cell death ligand (PDL)-1, PDL-2) or can modulate metabolic parameters and change T cell response. For example, tolerogenic DCs can release or induce enzymes such as indoleamine 2,3-dioxygenase (IDO) or heme oxygenase-1 (HO-1). IDO promotes the degradation of tryptophan to N-formylkynurenin leading to reduced T cell proliferation, whereas HO- 1 catalyzes degradation of hemoglobin resulting in production of monoxide and lower DC immunogenicity. Besides that, tolerogenic DCs also may produce retinoic acid (RA), which induces Treg differentiation.Human tolerogenic DCs may be induced by various immunosuppressive drugs or biomediators. Immunosuppressive drugs, e.g. corticosteroid dexamethasone, rapamycin, cyclosporine or acetylsalicylic acid, cause low expression of costimulatory molecules, reduced expression of MHC, higher expression of inhibitory molecules (e.g. PDL-1) or higher secretion of IL-10 or IDO. In addition, incubation with inhibitory cytokines IL-10 or TGF-β leads to generation of tolerogenic phenotype. Other mediators also affect generation of tolerogenic DC, e.g. vitamin D3, vitamin D2, hepatocyte growth factor or vasoactive intestinal peptide. The oldest and mostly used cytokine cocktail for in vitro DC generation is GM-CSF/IL-4.Tolerogenic DCs may be a potential candidate for specific immunotherapy and are studied for using them for treatment of inflammatory, autoimmune and allergic diseases and also in transplant medicine. Important and interesting feature of tolerogenic DCs is also the migratory capacity toward secondary lymph organs, leading to T-cell mediated immunosuppression. The first trial to transfer tolerogenic DCs to humans was undertaken by Ralph Steinman's group in 2001. Relating to the DC administration, various application have been used in humans in last years. Tolerogenic DCs have been injected e.g. intraperitoneally in patients with Crohn's disease, intradermally in diabetes and rheumatoid arthritis patients, subcutaneously in rheumatoid arthritis patients and via arthroscopic injections in joints of patient with rheumatoid and inflammatory arthritis.Therefore, it is necessary to test tolerogenic DCs for a stable phenotype to exclude a loss of the regulatory function and a switch to an immunostimulatory activity.
During a tolerant state potential effector cells remain but are tightly regulated by induced antigen-specific CD4+ regulatory T cells (iTregs). Many subsets of iTregs play a part in this process, but CD4+CD25+FoxP3+ Tregs play a key role, because they have the ability to convert conventional T cells into iTregs directly by secretion of the suppressive cytokines TGF-β, IL-10 or IL-35, or indirectly via dendritic cells (DCs). Production of IL-10 induces the formation of another population of regulatory T cells called Tr1. Tr1 cells are dependent on IL-10 and TGF-β as well as Tregs, but differ from them by lacking expression of Foxp3. High IL-10 production is characteristic for Tr1 cells themselves and they also produce TGF-β. In the presence of IL-10 can be also induced tolerogenic DCs from monocytes, whose production of IL-10 is also important for Tr1 formation. These interactions lead to the production of enzymes such as IDO (indolamine 2,3-dioxygenase) that catabolize essential amino acids. This microenvironment with a lack of essential amino acids together with other signals results in mTOR (mammalian target of rapamycin) inhibition which, particularly in synergy with TGF-β, direct the induction of new FoxP3 (forkhead box protein 3) expressing Tregs.","In the presence of IL-10 can be also induced tolerogenic DCs from monocytes, whose production of IL-10 is also important for Tr1 formation. Production of IL-10 induces the formation of another population of regulatory T cells called Tr1. High IL-10 production is characteristic for Tr1 cells themselves and they also produce TGF-β. Production of IL-10 is also much more rapid than its production by other T-helper cell types. Tr1 cells do not constitutively express FOXP3 but only transiently, upon their activation and in smaller amounts than CD25+ FOXP3+ regulatory cells. IL-10 directly inhibits T cells by blocking its production of IL-2, IFN-γ and GM-CSF and have tolerogenic effect on B cells and support differentiation of other regulatory T cells. Tr1 cells are dependent on IL-10 and TGF-β as well as Tregs, but differ from them by lacking expression of Foxp3. Besides that, tolerogenic DCs also may produce retinoic acid (RA), which induces Treg differentiation.
The specific cell-surface markers for Tr1 cells in humans and mice are CD4 CD49b LAG-3 CD226 from which LAG-3 and CD49b are indispensable. In addition, incubation with inhibitory cytokines IL-10 or TGF-β leads to generation of tolerogenic phenotype. PDL-1) or higher secretion of IL-10 or IDO. CD49b provides only little contribution to the differentiation and function of Tr1 cells.
Tr1 cells secrete large amount of suppressing cytokines IL-10 and TGF-β. CD49b provides only little contribution to the differentiation and function of Tr1 cells. They characteristically produce high levels of IL-10, IFN-γ, IL-5 and also TGF-β but neither IL-4 nor IL-2. Tolerogenic DCs also produce different cytokines as mature DCs (e.g.). IL-10 indirectly downregulates MHC II molecules and co-stimulatory molecules on antigen-presenting cells (APC) and force them to upregulate tolerogenic molecules such as ILT-3, ILT-4 and HLA-G.
During a tolerant state potential effector cells remain but are tightly regulated by induced antigen-specific CD4+ regulatory T cells (iTregs). Anti-inflammatory cytokines interleukin (IL)-10, transforming growth factor-β (TGF-β)). The spe","In the presence of IL-10 can be also induced tolerogenic DCs from monocytes, whose production of IL-10 is also important for Tr1 formation. Production of IL-10 induces the formation of another population of regulatory T cells called Tr1. High IL-10 production is characteristic for Tr1 cells themselves and they also produce TGF-β. Production of IL-10 is also much more rapid than its production by other T-helper cell types. Tr1 cells do not constitutively express FOXP3 but only transiently, upon their activation and in smaller amounts than CD25+ FOXP3+ regulatory cells. IL-10 directly inhibits T cells by blocking its production of IL-2, IFN-γ and GM-CSF and have tolerogenic effect on B cells and support differentiation of other regulatory T cells. Tr1 cells are dependent on IL-10 and TGF-β as well as Tregs, but differ from them by lacking expression of Foxp3. Besides that, tolerogenic DCs also may produce retinoic acid (RA), which induces Treg differentiation.
The specific cell-surface markers for Tr1 cells in humans and mice are CD4 CD49b LAG-3 CD226 from which LAG-3 and CD49b are indispensable. In addition, incubation with inhibitory cytokines IL-10 or TGF-β leads to generation of tolerogenic phenotype. PDL-1) or higher secretion of IL-10 or IDO. CD49b provides only little contribution to the differentiation and function of Tr1 cells.
Tr1 cells secrete large amount of suppressing cytokines IL-10 and TGF-β. CD49b provides only little contribution to the differentiation and function of Tr1 cells. They characteristically produce high levels of IL-10, IFN-γ, IL-5 and also TGF-β but neither IL-4 nor IL-2. Tolerogenic DCs also produce different cytokines as mature DCs (e.g.). IL-10 indirectly downregulates MHC II molecules and co-stimulatory molecules on antigen-presenting cells (APC) and force them to upregulate tolerogenic molecules such as ILT-3, ILT-4 and HLA-G.
During a tolerant state potential effector cells remain but are tightly regulated by induced antigen-specific CD4+ regulatory T cells (iTregs). Anti-inflammatory cytokines interleukin (IL)-10, transforming growth factor-β (TGF-β)). The spe[SEP]What is the role of IL-10 in the formation of Tr1 cells and tolerogenic DCs?","['B', 'D', 'E']",1.0
"What is the reason behind the designation of Class L dwarfs, and what is their color and composition?","Its relative color components are unique among brown dwarfs observed to date. New spectroscopic models for metal-poor brown dwarfs, resulted in a temperature lower than 500 K (<227 °C), making WISE 1534–1043 a Y-dwarf. L V star may refer to: * Brown dwarf * Red dwarf This is the 24th closest star to the Sun, and also intrinsically luminous for red dwarfs, having spectral class M0. Other late T- and Y-dwarfs show a much redder ch1-ch2 color when compared to WISE 1534–1043. Because the mass of a brown dwarf is between that of a planet and that of a star, they have also been called planetars or hyperjovians. Methane absorbs around the wavelength of 3.6 μm, corresponding to the W1 (WISE) and ch1 (Spitzer) bands, causing a red color for T and Y-dwarfs. LHS 2924 is the primary standard for the M9V spectral class. ==See also== * 2MASS J0523-1403 * EBLM J0555-57 == References == Category:M-type main-sequence stars Category:Boötes 3849 WISE 1534–1043 (or WISEA J153429.75-104303.3, and referred to as ""The Accident"") is a brown dwarf (substellar object), Class Y, the coolest class, visible only in the infrared. Examples include HD 114762 b (>11.68 MJ), Pi Mensae b (>10.312 MJ), and NGC 2423-3 b (>10.6 MJ). == Confirmed brown dwarfs orbiting primary stars == Sorted by increasing right ascension of the parent star. Brown dwarfs with names ending in a letter such as B, C, or D are in orbit around a primary star; those with names ending in a lower- case letter such as b, c, or d, may be exoplanets (see Exoplanet naming convention). The metallicity could be significantly lower and especially the extreme red J-W2 color suggests it could be cold even for a Y-dwarf. Various catalog designations have been used to name brown dwarfs. List of smallest red dwarf titleholders Star Date Radius Radius Radius km (mi) Notes 0.084 0.84 This star is slightly larger than the planet Saturn. 0.086 0.86 0.120 1.16 ==See also== * List of least massive stars * List of brown dwarfs * Lists of stars ==References== * Red dwarfs See T Tauri star ==List of named red dwarfs== This is a list of red dwarfs with names that are not systematically designated. Some exoplanets, especially those detected by radial velocity, can turn out to be brown dwarfs if their mass is higher than originally thought: most have only known minimum masses because the inclination of their orbit is not known. This is a list of brown dwarfs. Some brown dwarfs listed could still be massive planets. Some brown dwarfs listed could still be massive planets. LHS 2924, also commonly known as LP 271-25, is an extremely small and dim ultra-cool red dwarf located in the constellation of Boötes, about 35.85 light years from the Sun. ","Class L dwarfs are hotter than M stars and are designated L because L is the remaining letter alphabetically closest to M. They are bright blue in color and are brightest in ultraviolet. Their atmosphere is hot enough to allow metal hydrides and alkali metals to be prominent in their spectra. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs.","Class L dwarfs are cooler than M stars and are designated L because L is the remaining letter alphabetically closest to M. They are dark red in color and are brightest in infrared. 
Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs.","Class L dwarfs are hotter than M stars and are designated L because L is the next letter alphabetically after M. They are dark red in color and are brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs.","Class L dwarfs are cooler than M stars and are designated L because L is the next letter alphabetically after M. They are bright yellow in color and are brightest in visible light. Their atmosphere is hot enough to allow metal hydrides and alkali metals to be prominent in their spectra. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs.","Class L dwarfs are cooler than M stars and are designated L because L is the remaining letter alphabetically closest to M. They are bright green in color and are brightest in visible light. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. Some of these objects have masses small enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs.",B,kaggle200,"The existence of CrH in stars was only established in 1980 when spectral lines were identified in S-type stars and sunspots. CrH was discovered in brown dwarfs in 1999. Along with FeH, CrH became useful in classifying L dwarfs. The CrH spectrum was identified in a large sunspot in 1976, but the lines are much less prominent than FeH.
As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfs. T dwarfs are pinkish-magenta. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of HO and carbon monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH), features that were found only in the giant planets of the Solar System and Titan. CH, HO, and molecular hydrogen (H) collision-induced absorption (CIA) give Gliese 229B blue near-infrared colors. Its steeply sloped red optical spectrum also lacks the FeH and CrH bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali metals Na and K. These differences led J. Davy Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-band CH absorption. , 355 T dwarfs are known. NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs. Because of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magenta. T-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun.
The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is an optical spectrum dominated by absorption bands of titanium(II) oxide (TiO) and vanadium(II) oxide (VO) molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO features of M dwarfs. The subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs). , over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS). This spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 M) have the spectral class L2 to L6.
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.","Spectral class T As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfs. T dwarfs are pinkish-magenta. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of H2O and carbon monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH4), features that were found only in the giant planets of the Solar System and Titan. CH4, H2O, and molecular hydrogen (H2) collision-induced absorption (CIA) give Gliese 229B blue near-infrared colors. Its steeply sloped red optical spectrum also lacks the FeH and CrH bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali metals Na and K. These differences led J. Davy Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-band CH4 absorption. As of 2013, 355 T dwarfs are known. NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs. Because of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magenta. T-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun.
Spectral class L The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is an optical spectrum dominated by absorption bands of titanium(II) oxide (TiO) and vanadium(II) oxide (VO) molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO features of M dwarfs. The subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs). As of 2013, over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS). This spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 MJ) have the spectral class L2 to L6.
Class L Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while in the height of its luminous red nova eruption.","Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfsThis spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 MJ) have the spectral class L2 to L6.
Class L Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to MThis spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 M) have the spectral class L2 to L6.
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M, over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS)Spectral class T As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfsThe subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs)As of 2013, over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS)NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom GeballeT dwarfs are pinkish-magentaBecause of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magentaT-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun.
Spectra","Theory suggests that L dwarfs are a mixture of very-low-mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfsThis spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 MJ) have the spectral class L2 to L6.
Class L Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to MThis spectral class contains not only the brown dwarfs, because the coolest main-sequence stars above brown dwarfs (> 80 M) have the spectral class L2 to L6.
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M, over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS)Spectral class T As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T dwarfsThe subsequent identification of many objects like GD 165B ultimately led to the definition of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide absorption bands (TiO, VO), but by metal hydride emission bands (FeH, CrH, MgH, CaH) and prominent atomic lines of alkali metals (Na, K, Rb, Cs)As of 2013, over 900 L dwarfs have been identified, most by wide-field surveys: the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the Sloan Digital Sky Survey (SDSS)NIR classification schemes for T dwarfs have recently been developed by Adam Burgasser and Tom GeballeT dwarfs are pinkish-magentaBecause of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual appearance of T dwarfs to human visual perception is estimated to be not brown, but magentaT-class brown dwarfs, such as WISE 0316+4307, have been detected more than 100 light-years from the Sun.
Spectra[SEP]What is the reason behind the designation of Class L dwarfs, and what is their color and composition?","['B', 'D', 'C']",1.0
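The row above describes the L/T spectral classification qualitatively (L dwarfs defined by metal hydride bands and alkali-metal lines, T dwarfs by methane absorption). As a purely illustrative sketch that is not part of the dataset, the lookup below restates those characteristics; the dictionary, its wording, and the helper name `describe` are editorial additions drawn only from the text of that row.

```python
# Qualitative summary of the L and T dwarf characteristics described in the row above.
SPECTRAL_CLASS_NOTES = {
    "L": {
        "defining_features": "metal hydride bands (FeH, CrH, MgH, CaH) and "
                             "alkali-metal lines (Na, K, Rb, Cs)",
        "visual_color": "very dark red, brightest in the infrared",
        "population": "mixture of very-low-mass stars and brown dwarfs",
    },
    "T": {
        "defining_features": "methane (CH4) absorption in the H and K bands",
        "visual_color": "estimated pinkish-magenta to the eye",
        "population": "composed entirely of brown dwarfs",
    },
}

def describe(spectral_class: str) -> str:
    """Return a one-line summary for class 'L' or 'T' based on the passage."""
    notes = SPECTRAL_CLASS_NOTES[spectral_class.upper()]
    return (f"Class {spectral_class.upper()}: {notes['defining_features']}; "
            f"{notes['visual_color']}; {notes['population']}.")

print(describe("L"))
print(describe("T"))
```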
What was Isaac Newton's explanation for rectilinear propagation of light?,"While this construction indeed predicted rectilinear propagation, it was difficult to reconcile with the common observation that wavefronts on the surface of water can bend around obstructions, and with the similar behavior of sound waves—causing Newton to maintain, to the end of his life, that if light consisted of waves it would ""bend and spread every way"" into the shadows.Newton, 1730, p. 362. The corpuscular theory of light, favored by Isaac Newton and accepted by nearly all of Fresnel's seniors, easily explained rectilinear propagation: the corpuscles obviously moved very fast, so that their paths were very nearly straight. This discovery gave Newton another reason to reject the wave theory: rays of light evidently had ""sides"".Newton, 1730, pp. 358–361. The corpuscular theory, with the hypothesis that the corpuscles were subject to forces acting perpendicular to surfaces, explained the same laws equally well,Darrigol, 2012, pp. 93–94,103. albeit with the implication that light traveled faster in denser media; that implication was wrong, but could not be directly disproven with the technology of Newton's time or even Fresnel's time . Newton, who called diffraction ""inflexion"", supposed that rays of light passing close to obstacles were bent (""inflected""); but his explanation was only qualitative.Darrigol, 2012, pp. 101–102; Newton, 1730, Book , Part . Newton offered an alternative ""Rule"" for the extraordinary refraction,Newton, 1730, p. 356. which rode on his authority through the 18th century, although he made ""no known attempt to deduce it from any principles of optics, corpuscular or otherwise."" Modern scholarship has revealed that Newton's analysis and resynthesis of white light owes a debt to corpuscular alchemy.William R. Newman, ""Newton's Early Optical Theory and its Debt to Chymistry"", in Danielle Jacquart and Michel Hochmann, eds., Lumière et vision dans les sciences et dans les arts (Geneva: Droz, 2010), pp. 283–307. The wave theory, as developed by Christiaan Huygens in his Treatise on Light (1690), explained rectilinear propagation on the assumption that each point crossed by a traveling wavefront becomes the source of a secondary wavefront. He had not mentioned the curved paths of the external fringes of a shadow; but, as he later explained,Young to Arago (in English), 12 January 1817, in Young, 1855, pp. 380–384, at p. 381; quoted in Silliman, 1967, p. 171. that was because Newton had already done so.Newton, 1730, p. 321, Fig. 1, where the straight rays contribute to the curved path of a fringe, so that the same fringe is made by different rays at different distances from the obstacle (cf. Darrigol, 2012, p. 101, Fig. 3.11 – where, in the caption, ""1904"" should be ""1704"" and """" should be """"). Newton, 1730, Opticks: or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light, 4th Ed. The text explained the principles of Newton's Opticks while avoiding much of the mathematical rigor of the work in favor of a more ""agreeable"" text. Newton himself tried to explain colors of thin plates using the corpuscular theory, by supposing that his corpuscles had the wavelike property of alternating between ""fits of easy transmission"" and ""fits of easy reflection"",Darrigol, 2012, pp. 98–100; Newton, 1730, p. 281. the distance between like ""fits"" depending on the color and the mediumNewton, 1730, p. 284. 
and, awkwardly, on the angle of refraction or reflection into that medium.Newton, 1730, pp. 283,287. A very short introduction, Oxford University Press 2007 Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating into a denser medium. The corpuscular theory could not rigorously link double refraction to surface forces; the wave theory could not yet link it to polarization. Later, he coined the terms linear polarization, circular polarization, and elliptical polarization, explained how optical rotation could be understood as a difference in propagation speeds for the two directions of circular polarization, and (by allowing the reflection coefficient to be complex) accounted for the change in polarization due to total internal reflection, as exploited in the Fresnel rhomb. Defenders of the established corpuscular theory could not match his quantitative explanations of so many phenomena on so few assumptions. The Optical Papers of Isaac Newton. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light. After the wave theory of light was subsumed by Maxwell's electromagnetic theory in the 1860s, some attention was diverted from the magnitude of Fresnel's contribution. With sections covering the nature of light, diffraction, thin-film interference, reflection and refraction, double refraction and polarization, chromatic polarization, and modification of polarization by reflection, it made a comprehensive case for the wave theory to a readership that was not restricted to physicists.Cf. Frankel, 1976, p. 169. But photons did not exactly correspond to Newton's corpuscles; for example, Newton's explanation of ordinary refraction required the corpuscles to travel faster in media of higher refractive index, which photons do not. ","Isaac Newton rejected the wave theory of light and proposed that light consists of corpuscles that are subject to a force acting parallel to the interface. In this model, the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back.","Isaac Newton rejected the wave theory of light and proposed that light consists of corpuscles that are subject to a force acting perpendicular to the interface. In this model, the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the near side of the force field; at more oblique incidence, the corpuscle would be turned back.","Isaac Newton accepted the wave theory of light and proposed that light consists of transverse waves that are subject to a force acting perpendicular to the interface. In this model, the critical angle was the angle of incidence at which the normal velocity of the approaching wave was just enough to reach the far side of the force field; at more oblique incidence, the wave would be turned back.","Isaac Newton rejected the wave theory of light and proposed that light consists of corpuscles that are subject to a force acting perpendicular to the interface. 
In this model, the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back.","Isaac Newton accepted the wave theory of light and proposed that light consists of longitudinal waves that are subject to a force acting perpendicular to the interface. In this model, the critical angle was the angle of incidence at which the normal velocity of the approaching wave was just enough to reach the far side of the force field; at more oblique incidence, the wave would be turned back.",D,kaggle200,"At that time, many favored Isaac Newton's corpuscular theory of light, among them the theoretician Siméon Denis Poisson. In 1818 the French Academy of Sciences launched a competition to explain the properties of light, where Poisson was one of the members of the judging committee. The civil engineer Augustin-Jean Fresnel entered this competition by submitting a new wave theory of light.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. (See the bottom photograph to the right.) When Thomas Young (1773–1829) first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries. However, the later discovery of the photoelectric effect demonstrated that under different circumstances, light can behave as if it is composed of discrete particles. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
The angle of incidence, in geometric optics, is the angle between a ray incident on a surface and the line perpendicular (at 90 degree angle) to the surface at the point of incidence, called the normal. The ray can be formed by any waves, such as optical, acoustic, microwave, and X-ray. In the figure below, the line representing a ray makes an angle θ with the normal (dotted line). The angle of incidence at which light is first totally internally reflected is known as the critical angle. The angle of reflection and angle of refraction are other angles related to beams.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadows. His corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interface. In this model, for dense-to-rare incidence, the force was an attraction back towards the denser medium, and the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back. Newton gave what amounts to a formula for the critical angle, albeit in words: ""as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle"".","During this period, many scientists proposed a wave theory of light based on experimental observations, including Robert Hooke, Christiaan Huygens and Leonhard Euler. However, Isaac Newton, who did many experimental investigations of light, had rejected the wave theory of light and developed his corpuscular theory of light according to which light is emitted from a luminous body in the form of tiny particles. This theory held sway until the beginning of the nineteenth century despite the fact that many phenomena, including diffraction effects at edges or in narrow apertures, colours in thin films and insect wings, and the apparent failure of light particles to crash into one another when two light beams crossed, could not be adequately explained by the corpuscular theory which, nonetheless, had many eminent supporters, including Pierre-Simon Laplace and Jean-Baptiste Biot.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. (See the bottom photograph to the right.) When Thomas Young (1773–1829) first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries. However, the later discovery of the photoelectric effect demonstrated that under different circumstances, light can behave as if it is composed of discrete particles. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadows. His corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interface. In this model, for dense-to-rare incidence, the force was an attraction back towards the denser medium, and the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back. Newton gave what amounts to a formula for the critical angle, albeit in words: ""as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle"".Newton went beyond Huygens in two ways. First, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be ""taken out"" by ""total Reflexion"", followed by the less-refracted rays. Second, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second.In two other ways, however, Newton's system was less coherent. First, his explanation of partial reflection depended not only on the supposed forces of attraction between corpuscles and media, but also on the more nebulous hypothesis of ""Fits of easy Reflexion"" and ""Fits of easy Transmission"". Second, although his corpuscles could conceivably have ""sides"" or ""poles"", whose orientations could conceivably determine whether the corpuscles suffered ordinary or extraordinary refraction in ""Island-Crystal"", his geometric description of the extraordinary refraction was theoretically unsupported and empirically inaccurate.","These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadowsHis corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interfaceThe angle of reflection and angle of refraction are other angles related to beams.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadowsSecond, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second.In two other ways, however, Newton's system was less coherentHowever, Isaac Newton, who did many experimental investigations of light, had rejected the wave theory of light and developed his corpuscular theory of light according to which light is emitted from a luminous body in the form of tiny particles- At that time, many favored Isaac Newton's corpuscular theory of light, among them the theoretician Siméon Denis PoissonYoung's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuriesFirst, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be ""taken out"" by ""total Reflexion"", followed by","These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadowsHis corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interfaceThe angle of reflection and angle of refraction are other angles related to beams.
Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would ""bend and spread every way"" into the shadowsSecond, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second.In two other ways, however, Newton's system was less coherentHowever, Isaac Newton, who did many experimental investigations of light, had rejected the wave theory of light and developed his corpuscular theory of light according to which light is emitted from a luminous body in the form of tiny particles- At that time, many favored Isaac Newton's corpuscular theory of light, among them the theoretician Siméon Denis PoissonYoung's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuriesFirst, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be ""taken out"" by ""total Reflexion"", followed by[SEP]What was Isaac Newton's explanation for rectilinear propagation of light?","['D', 'B', 'C']",1.0
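The row above quotes Newton's verbal rule for the onset of total internal reflection ("as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle"), which in modern notation is sin θc = n_rare / n_dense for dense-to-rare incidence. The sketch below is an editorial illustration of that relation, not part of the dataset; the function name and the glass/air indices in the example are assumed.

```python
import math

def critical_angle_deg(n_dense: float, n_rare: float) -> float:
    """Critical angle for total internal reflection, dense-to-rare incidence.

    Modern restatement of the rule quoted in the passage:
    sin(theta_c) = n_rare / n_dense, defined only when n_dense > n_rare.
    """
    if n_dense <= n_rare:
        raise ValueError("Total internal reflection requires n_dense > n_rare")
    return math.degrees(math.asin(n_rare / n_dense))

# Example with assumed indices: ordinary glass (~1.5) into air (~1.0)
# gives a critical angle of roughly 41.8 degrees.
print(round(critical_angle_deg(1.5, 1.0), 1))
```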
What is the relationship between chemical potential and quarks/antiquarks?,"The Stockmayer potential is a mathematical model for representing the interactions between pairs of atoms or molecules. Most physicists simply refer to ""the number of bottom quarks"" and ""the number of bottom antiquarks"". ==Further reading== * Category:Quarks Category:Flavour (particle physics) Likewise, the potential has been extended to include spin-dependent terms ==Calculation of the quark-quark potential== A test of validity for approaches that seek to explain color confinement is that they must produce, in the limit that quark motions are non-relativistic, a potential that agrees with the Cornell potential. The bottom quark or b quark, also known as the beauty quark, is a third- generation heavy quark with a charge of − e. Up, charm and top quarks have an electric charge of +⅔, while the down, strange, and bottom quarks have an electric charge of −⅓. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet. In physics, bottomness (symbol B′ using a prime as plain B is used already for baryon number) or beauty is a flavour quantum number reflecting the difference between the number of bottom antiquarks (n) and the number of bottom quarks (n) that are present in a particle: : B^\prime = -(n_b - n_{\bar b}) Bottom quarks have (by convention) a bottomness of −1 while bottom antiquarks have a bottomness of +1. The Cornell Potential is an effective method to account for the confinement of quarks. The potential has the form: :V(r) = -\frac{4}{3}\frac{\alpha_s}{\;r\;} + \sigma\,r + const.~ where r is the effective radius of the quarkonium state, \alpha_s is the QCD running coupling, \sigma is the QCD string tension and const. \simeq -0.3 GeV is a constant. All quarks are described in a similar way by electroweak and quantum chromodynamics, but the bottom quark has exceptionally low rates of transition to lower-mass quarks. Then the proton wave function can be written in the simpler form, :p\left(\frac{1}{2},\frac{1}{2}\right)=\frac{uud}{\sqrt{6}}[2\uparrow\uparrow\downarrow-\uparrow\downarrow\uparrow-\downarrow\uparrow\uparrow] and the :\Delta^{+}\left(\frac{3}{3},\frac{3}{2}\right)=uud[\uparrow\uparrow\uparrow] If quark-quark interactions are limited to two-body interactions, then all the successful quark model predictions, including sum rules for baryon masses and magnetic moments, can be derived. ===The discovery of color=== Color quantum numbers are the characteristic charges of the strong force, and are completely uninvolved in electroweak interactions. These consist of a bottom quark and its antiparticle. The convention is that the flavour quantum number sign for the quark is the same as the sign of the electric charge (symbol Q) of that quark (in this case, Q = −). It is sometimes useful to think of the basis states of quarks as the six states of three flavors and two spins per flavor. This article discusses the quark model for the up, down, and strange flavors of quark (which form an approximate flavor SU(3) symmetry). Conversely, the quarks serve in the definition of quantum chromodynamics, the fundamental theory fully describing the strong interactions; and the Eightfold Way is now understood to be a consequence of the flavor symmetry structure of the lightest three of them. 
==Mesons== thumb|Figure 3: Mesons of spin 1 form a nonet The Eightfold Way classification is named after the following fact: If we take three flavors of quarks, then the quarks lie in the fundamental representation, 3 (called the triplet) of flavor SU(3). Its value is \sigma \sim 0.18 GeV^2. \sigma controls the intercepts and slopes of the linear Regge trajectories. ==Domains of application== The Cornell potential applies best for the case of static quarks (or very heavy quarks with non-relativistic motion), although relativistic improvements to the potential using speed-dependent terms are available. In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks which give rise to the quantum numbers of the hadrons. As with other flavour-related quantum numbers, bottomness is preserved under strong and electromagnetic interactions, but not under weak interactions. The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. ","Chemical potential, represented by μ, is a measure of the imbalance between quarks and antiquarks in a system. Higher μ indicates a stronger bias favoring quarks over antiquarks.","Chemical potential, represented by μ, is a measure of the balance between quarks and antiquarks in a system. Higher μ indicates an equal number of quarks and antiquarks.","Chemical potential, represented by μ, is a measure of the imbalance between quarks and antiquarks in a system. Higher μ indicates a stronger bias favoring antiquarks over quarks.","Chemical potential, represented by μ, is a measure of the density of antiquarks in a system. Higher μ indicates a higher density of antiquarks.","Chemical potential, represented by μ, is a measure of the density of quarks in a system. Higher μ indicates a higher density of quarks.",A,kaggle200,"The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, ""μ"", of the system,
μ(""T"", ""P"") is defined as the chemical potential of pure species ""i"". Given this definition, the chemical potential of species ""i"" in an ideal solution is
The set Tan(""μ"", ""a"") of tangent measures of a measure ""μ"" at a point ""a"" in the support of ""μ"" is nonempty on mild conditions on ""μ"". By the weak compactness of Radon measures, Tan(""μ"", ""a"") is nonempty if one of the following conditions hold:
For guidance it also shows the typical values of μ and ""T"" in heavy-ion collisions and in the early universe. For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the system. Higher μ means a stronger bias favoring quarks over antiquarks. At low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks.","αμA+βμB=σμS+τμT where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A} of that reagent.
ln {A} (where μoA is the standard chemical potential).
Now, imagine starting at the bottom left corner of the phase diagram, in the vacuum where μ = T = 0. If we heat up the system without introducing any preference for quarks over antiquarks, this corresponds to moving vertically upwards along the T axis. At first, quarks are still confined and we create a gas of hadrons (pions, mostly). Then around T = 150 MeV there is a crossover to the quark gluon plasma: thermal fluctuations break up the pions, and we find a gas of quarks, antiquarks, and gluons, as well as lighter particles such as photons, electrons, positrons, etc. Following this path corresponds to travelling far back in time (so to say), to the state of the universe shortly after the big bang (where there was a very tiny preference for quarks over antiquarks).
The phase diagram of quark matter is not well known, either experimentally or theoretically. A commonly conjectured form of the phase diagram is shown in the figure to the right. It is applicable to matter in a compact star, where the only relevant thermodynamic potentials are quark chemical potential μ and temperature T. For guidance it also shows the typical values of μ and T in heavy-ion collisions and in the early universe. For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the system. Higher μ means a stronger bias favoring quarks over antiquarks. At low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks.","For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the systemThe chemical potential of a reagent A is a function of the activity, {A} of that reagent.
ln {A} (where μoA is the standard chemical potential).
Now, imagine starting at the bottom left corner of the phase diagram, in the vacuum where μ = T = 0Given this definition, the chemical potential of species ""i"" in an ideal solution is
The set Tan(""μ"", ""a"") of tangent measures of a measure ""μ"" at a point ""a"" in the support of ""μ"" is nonempty on mild conditions on ""μ""Higher μ means a stronger bias favoring quarks over antiquarksIt is applicable to matter in a compact star, where the only relevant thermodynamic potentials are quark chemical potential μ and temperature T- The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, ""μ"", of the system,
μ(""T"", ""P"") is defined as the chemical potential of pure species ""i""Following this path corresponds to travelling far back in time (so to say), to the state of the universe shortly after the big bang (where there was a very tiny preference for quarks over antiquarks).
The phase diagram of quark matter is not well known, either experimentally or theoreticallyAt low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarksAt low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks.At first, quarks are still confined and we create a gas of hadrons (pions, mostly)If we heat up the system without introducing any preference for quarks over antiquarks, this corresponds to moving vertically upwards along the T axisαμA+βμB=σμS+τμT where μ is in this case a partial molar Gibbs energy, a chemical potentialThen around T = 150 MeV there is a crossover to the quark gluon plasma: thermal fluctuations break up the pions, and we find a gas of quarks, antiquarks, and gluons, as well as lighter particles such as photons, electrons, positro","For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the systemThe chemical potential of a reagent A is a function of the activity, {A} of that reagent.
ln {A} (where μoA is the standard chemical potential).
Now, imagine starting at the bottom left corner of the phase diagram, in the vacuum where μ = T = 0Given this definition, the chemical potential of species ""i"" in an ideal solution is
The set Tan(""μ"", ""a"") of tangent measures of a measure ""μ"" at a point ""a"" in the support of ""μ"" is nonempty on mild conditions on ""μ""Higher μ means a stronger bias favoring quarks over antiquarksIt is applicable to matter in a compact star, where the only relevant thermodynamic potentials are quark chemical potential μ and temperature T- The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, ""μ"", of the system,
μ(""T"", ""P"") is defined as the chemical potential of pure species ""i""Following this path corresponds to travelling far back in time (so to say), to the state of the universe shortly after the big bang (where there was a very tiny preference for quarks over antiquarks).
The phase diagram of quark matter is not well known, either experimentally or theoreticallyAt low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarksAt low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks.At first, quarks are still confined and we create a gas of hadrons (pions, mostly)If we heat up the system without introducing any preference for quarks over antiquarks, this corresponds to moving vertically upwards along the T axisαμA+βμB=σμS+τμT where μ is in this case a partial molar Gibbs energy, a chemical potentialThen around T = 150 MeV there is a crossover to the quark gluon plasma: thermal fluctuations break up the pions, and we find a gas of quarks, antiquarks, and gluons, as well as lighter particles such as photons, electrons, positro[SEP]What is the relationship between chemical potential and quarks/antiquarks?","['E', 'A', 'C']",0.5
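The row above is mostly qualitative, but its description of the low-μ axis of the quark-matter phase diagram (vacuum at μ = T = 0, a hadron gas of mostly pions at low temperature, and a crossover to the quark–gluon plasma around T ≈ 150 MeV) can be captured in a toy labeller. This is only a sketch of the statements in that row: the 150 MeV value comes straight from the passage, the function name is invented, and behaviour at nonzero μ is deliberately left undetermined because the passage says the phase diagram there is not well known.

```python
CROSSOVER_T_MEV = 150.0  # crossover temperature quoted in the passage

def qcd_phase_label(mu_mev: float, temperature_mev: float) -> str:
    """Toy label following only the qualitative statements in the passage above."""
    if mu_mev == 0.0 and temperature_mev == 0.0:
        return "vacuum"
    if mu_mev == 0.0:
        # Heating with no quark/antiquark imbalance, as described in the text.
        if temperature_mev < CROSSOVER_T_MEV:
            return "hadron gas (mostly pions)"
        return "quark-gluon plasma"
    # At nonzero mu the passage states the phase diagram is not well known.
    return "uncertain (phase diagram not well known at nonzero mu)"

print(qcd_phase_label(0.0, 100.0))   # hadron gas (mostly pions)
print(qcd_phase_label(0.0, 200.0))   # quark-gluon plasma
```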
What is the American Petroleum Institute (API) gravity?,"API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. API has entered petroleum industry nomenclature in a number of areas: * API gravity, a measure of the density of petroleum. API gravity is graduated in degrees on a hydrometer instrument. The scale was so firmly established that, by 1921, the remedy implemented by the American Petroleum Institute was to create the API gravity scale, recognizing the scale that was actually being used.API Degree history ==API gravity formulas== The formula to calculate API gravity from specific gravity (SG) is: :\text{API gravity} = \frac{141.5}{\text{SG}} - 131.5 Conversely, the specific gravity of petroleum liquids can be derived from their API gravity value as :\text{SG at}~60^\circ\text{F} = \frac{141.5}{\text{API gravity} + 131.5} Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60 °F) has an API gravity of: :\frac{141.5}{1.0} - 131.5 = 10.0^\circ{\text{API}} ==Using API gravity to calculate barrels of crude oil per metric ton== In the oil industry, quantities of crude oil are often measured in metric tons. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. The specific gravity is defined by the formula below. :\mbox{SG oil} = \frac{\rho_\text{crudeoil}}{\rho_{\text{H}_2\text{O}}} With the formula presented in the previous section, the API gravity can be readily calculated. API gravity values of most petroleum liquids fall between 10 and 70 degrees. Retrieved on: 2012-09-10. ==References== ==External links== *Comments on API gravity adjustment scale *Instructions for using a glass hydrometer measured in API gravity Category:Units of density Category:Physical quantities Category:Petroleum geology Category:Petroleum production Gravity Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. * Light crude oil has an API gravity higher than 31.1° (i.e., less than 870 kg/m3) * Medium oil has an API gravity between 22.3 and 31.1° (i.e., 870 to 920 kg/m3) * Heavy crude oil has an API gravity below 22.3° (i.e., 920 to 1000 kg/m3) * Extra heavy oil has an API gravity below 10.0° (i.e., greater than 1000 kg/m3) However, not all parties use the same grading.Crude oil grades, Crudemonitor.ca, web PDF file: CMonitor-Gr-PDF The United States Geological Survey uses slightly different ranges.USGS FS2006-3133_508, web PDF file: USGS-508-PDF Crude oil with API gravity less than 10° is referred to as extra heavy oil or bitumen. Crude oil is classified as light, medium, or heavy according to its measured API gravity. One can calculate the approximate number of barrels per metric ton for a given crude oil based on its API gravity: :\text{barrels of crude oil per metric ton} = \frac{\text{API gravity}+131.5}{141.5\times 0.159} For example, a metric ton of West Texas Intermediate (39.6° API) has a volume of about 7.6 barrels. 
==Measurement of API gravity from its specific gravity== To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052. The 1980 value is 999.012 kg/m3.API Manual of Petroleum Measurement Standards, Chapter 11.1 – 1980,Volume XI/XII, Adjunct to: ASTM D1250-80 and IP 200/80 In some cases the standard conditions may be 15 °C (59 °F) and not 60 °F (15.56 °C), in which case a different value for the water density would be appropriate (see standard conditions for temperature and pressure). ==Direct measurement of API gravity (hydrometer method)== There are advantages to field testing and on-board conversion of measured volumes to volume correction. * API number, a unique identifier applied to each petroleum exploration or production well drilled in the United States. The American Petroleum Institute (API) is the largest U.S. trade association for the oil and natural gas industry. It is used to compare densities of petroleum liquids. When converting oil density to specific gravity using the above definition, it is important to use the correct density of water, according to the standard conditions used when the measurement was made. This method is detailed in ASTM D287. ==Classifications or grades== Generally speaking, oil with an API gravity between 40 and 45° commands the highest prices. Bitumen derived from oil sands deposits in Alberta, Canada, has an API gravity of around 8°. ",API gravity is a measure of how heavy or light a petroleum liquid is compared to water. It is an inverse measure of a petroleum liquid's density relative to that of water and is graduated in degrees on a hydrometer instrument.,API gravity is a measure of the viscosity of a petroleum liquid. It is an inverse measure of a petroleum liquid's density relative to that of water and is graduated in degrees on a hydrometer instrument.,API gravity is a measure of the temperature at which a petroleum liquid freezes. It is an inverse measure of a petroleum liquid's density relative to that of water and is graduated in degrees on a hydrometer instrument.,API gravity is a measure of how much petroleum liquid is present in a given volume of water. It is an inverse measure of a petroleum liquid's density relative to that of water and is graduated in degrees on a hydrometer instrument.,API gravity is a measure of the acidity or alkalinity of a petroleum liquid. It is an inverse measure of a petroleum liquid's density relative to that of water and is graduated in degrees on a hydrometer instrument.,A,kaggle200,"To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052.
Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows: formula_30
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees.","VCorrected=VCF∗VObserved Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows: 141.5 131.5 Traditionally, VCF / CTL are found by matching the observed temperature and API gravity within standardized books and tables published by the American Petroleum Institute. These methods are often more time-consuming than entering the values into an online VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees.","These methods are often more time-consuming than entering the values into an online VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity)- To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052.
Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows: formula_30
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity)API gravity values of most petroleum liquids fall between 10 and 70 degreesAlthough API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'API gravity values of most petroleum liquids fall between 10 and 70 degrees.API gravity is graduated in degrees on a hydrometer instrumentFor example, if one petroleum liquid is less dense than another, it has a greater API gravityVCorrected=VCF∗VObserved Since API gravity is an inver","These methods are often more time-consuming than entering the values into an online VCF calculator; however, due to the variance in methodology and computation of constants, the tables published by the American Petroleum Institute are preferred when dealing with the purchase and sale of crude oil and residual fuels.
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity)- To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052.
Since API gravity is an inverse measure of a liquid's density relative to that of water, it can be calculated by first dividing the liquid's density by the density of water at a base temperature (usually 60 °F) to compute Specific Gravity (SG), then converting the Specific Gravity to Degrees API as follows: formula_30
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks.
API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity)API gravity values of most petroleum liquids fall between 10 and 70 degreesAlthough API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'API gravity values of most petroleum liquids fall between 10 and 70 degrees.API gravity is graduated in degrees on a hydrometer instrumentFor example, if one petroleum liquid is less dense than another, it has a greater API gravityVCorrected=VCF∗VObserved Since API gravity is an inver[SEP]What is the American Petroleum Institute (API) gravity?","['A', 'D', 'B']",1.0
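The row above states the API gravity formulas explicitly (API = 141.5/SG − 131.5, SG = 141.5/(API + 131.5), and barrels per metric ton ≈ (API + 131.5)/(141.5 × 0.159)). The following is a minimal editorial sketch of those conversions, not part of the dataset; the function names are invented here.

```python
def api_from_sg(sg_at_60f: float) -> float:
    """API gravity from specific gravity at 60 F (formula quoted in the passage)."""
    return 141.5 / sg_at_60f - 131.5

def sg_from_api(api: float) -> float:
    """Specific gravity at 60 F from API gravity (inverse of the formula above)."""
    return 141.5 / (api + 131.5)

def barrels_per_metric_ton(api: float) -> float:
    """Approximate barrels of crude oil per metric ton, as given in the passage."""
    return (api + 131.5) / (141.5 * 0.159)

# Worked examples from the passage:
print(round(api_from_sg(1.0), 1))              # 10.0 -> same density as pure water at 60 F
print(round(sg_from_api(39.6), 3))             # ~0.827 for West Texas Intermediate
print(round(barrels_per_metric_ton(39.6), 1))  # ~7.6 barrels per metric ton for WTI
```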
What are the two main factors that cause resistance in a metal?,"Within a certain range of strain this relationship is linear, so that the piezoresistive coefficient : \rho_\sigma = \frac{\left(\frac{\partial\rho}{\rho}\right)}{\varepsilon} where :∂ρ = Change in resistivity :ρ = Original resistivity :ε = Strain is constant. === Piezoresistivity in metals === Usually the resistance change in metals is mostly due to the change of geometry resulting from applied mechanical stress. Geological resistance is a measure of how well minerals resist erosive factors, and is based primarily on hardness, chemical reactivity and cohesion. Specific properties are designed into metal components to make them more robust to various environmental conditions. Metal components are designed to withstand the environment and stresses that they will be subjected to. The design of a metal component involves not only a specific elemental composition but also specific manufacturing process such as heat treatments, machining processes, etc. In contrast to the piezoelectric effect, the piezoresistive effect causes a change only in electrical resistance, not in electric potential. == History == The change of electrical resistance in metal devices due to an applied mechanical load was first discovered in 1856 by Lord Kelvin. Newton's metal is a fusible alloy with a low melting point. In platinum alloys, for instance, piezoresistivity is more than a factor of two larger, combining with the geometry effects to give a strain gauge sensitivity of up to more than three times as large than due to geometry effects alone. The huge arrays of different metals that result all have unique physical properties. Metallurgical failure analysis is the process to determine the mechanism that has caused a metal component to fail. The more hardness, less reactivity and more cohesion a mineral has, the less susceptible it is to erosion. The piezoresistive effect is a change in the electrical resistivity of a semiconductor or metal when mechanical strain is applied. Pure nickel's piezoresistivity is -13 times larger, completely dwarfing and even reversing the sign of the geometry-induced resistance change. === Piezoresistive effect in bulk semiconductors === The piezoresistive effect of semiconductor materials can be several orders of magnitudes larger than the geometrical effect and is present in materials like germanium, polycrystalline silicon, amorphous silicon, silicon carbide, and single crystal silicon. In cases where it is, it can be calculated using the simple resistance equation derived from Ohm's law; :R = \rho\frac{\ell}{A} \, where :\ell Conductor length [m] :A Cross-sectional area of the current flow [m²] Some metals display piezoresistivity that is much larger than the resistance change due to geometry. This results in a change in resistivity of the material. A metallurgical failure analysis takes into account as much of this information as possible during analysis. Magnesium, aluminium and titanium are light metals of significant commercial importance.Brandes EA & Brook GB (eds) 1998, Light Metals Handbook, Butterworth Heinemann, Oxford, , p. viii Their densities of 1.7, 2.7 and 4.5 g/cm3 range from 19 to 56% of the densities of the older structural metals,Polmear I 2006, Light Alloys: From Traditional Alloys to Nanocrystals, 4th ed., Butterworth Heinemann, Oxford, , p. 1 iron (7.9) and copper (8.9). 
==See also== * Heavy metals ==References== Category:Sets of chemical elements Category:Metals For silicon, gauge factors can be two orders of magnitudes larger than those observed in most metals (Smith 1954). ASM,ASM-International Metals Handbook, Ninth Edition, Corrosion, ASM-International, Metals Park, OH and/or NACENACE- International NACE Basic Corrosion Course, NACE-International, Houston, TX as distinct metallurgical failure mechanisms. === Caused by corrosion and stress === *Stress corrosion crackingM&M; Engineering Conduit Fall 2007 “Chloride Pitting and Stress Corrosion Cracking of Stainless Steel Alloys,” Stress corrosion (NACE term) *Corrosion fatigue *Caustic cracking (ASTM term) *Caustic embrittlement (ASM term) *Sulfide stress cracking (ASM, NACE term) *Stress-accelerated Corrosion (NACE term) *Hydrogen stress cracking (ASM term) *Hydrogen-assisted stress corrosion cracking (ASM term) ===Caused by stress=== *Fatigue (ASTM, ASM term) *Mechanical overload *Creep *Rupture *Cracking (NACE term) *Embrittlement ===Caused by corrosion=== *Erosion corrosion *Pitting corrosion Oxygen pitting *Hydrogen embrittlement *Hydrogen-induced cracking (ASM term) *Corrosion embrittlement (ASM term) *Hydrogen disintegration (NACE term) *Hydrogen-assisted cracking (ASM term) *Hydrogen blistering *Corrosion == Potential root causes == Potential root causes of metallurgical failures are vast, spanning the lifecycle of component from design to manufacturing to usage. A light metal is any metal of relatively low density.Jackson JA, Mehl JP, Neuendorf KKE (eds) 2005, Glossary of Geology, 5th ed., American Geological Institute, Alexandria, , p. 371 More specific definitions have been proposed; none have obtained widespread acceptance. ","The amount of resistance in a metal is mainly caused by the temperature and the pressure applied to the metal. Higher temperatures cause bigger vibrations, and pressure causes the metal to become more compact, leading to more resistance.","The amount of resistance in a metal is mainly caused by the temperature and the purity of the metal. Higher temperatures cause bigger vibrations, and a mixture of different ions acts as an irregularity.","The amount of resistance in a metal is mainly caused by the temperature and the thickness of the metal. Higher temperatures cause bigger vibrations, and thicker metals have more irregularities, leading to more resistance.","The amount of resistance in a metal is mainly caused by the purity of the metal and the amount of pressure applied to the metal. A mixture of different ions acts as an irregularity, and pressure causes the metal to become more compact, leading to more resistance.","The amount of resistance in a metal is mainly caused by the purity of the metal and the thickness of the metal. A mixture of different ions acts as an irregularity, and thicker metals have more irregularities, leading to more resistance.",B,kaggle200,"Nitride chlorides may be produced by heating metal nitrides with metal chlorides. The ammonolysis process heats a metal chloride with ammonia. A related method heats a metal or metal hydride with ammonium chloride. The nitrogen source could also be an azide or an amide.
A metal zipper functions just like any other zipper, with a number of similar components. The components of a metal zipper include:
A scrap metal shredder, also sometimes referred to as a metal scrap shredder, is a machine used for reducing the size of scrap metal. Scrap metal shredders come in many different variations and sizes.
Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions.","A scrap metal shredder, also sometimes referred to as a metal scrap shredder, is a machine used for reducing the size of scrap metal. Scrap metal shredders come in many different variations and sizes.
In metalworking, a filler metal is a metal added in the making of a joint through welding, brazing, or soldering.
Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions.","The amount of resistance is thus mainly caused by two factors. The more regular the lattice is, the less disturbance happens and thus the less resistance. When the electron wave travels through the lattice, the waves interfere, which causes resistance. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. Scrap metal shredders come in many different variations and sizes.
Most metals have electrical resistance.
In metalworking, a filler metal is a metal added in the making of a joint through welding, brazing, or soldering.
Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. A related method heats a metal or metal hydride with ammonium chloride. Nitride chlorides may be produced by heating metal nitrides with metal chlorides. The components of a metal zipper include:
A scrap metal shredder, also sometimes referred to as a metal scrap shredder, is a machine used for reducing the size of scrap metal. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. The nitrogen source could also be an azide or an amide.
A metal zipper functions just like any other zipper, with a number of similar components. A scrap metal shredder, also sometimes referred to as a metal scrap shredder, is a machine used for reducing the size of scrap metal. The ammonolysis process heats a metal chloride with ammonia. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions.","The amount of resistance is thus mainly caused by two factors.
[SEP]What are the two main factors that cause resistance in a metal?","['D', 'B', 'E']",0.5
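The two relations quoted in fragments in this row's context (the Ohm's-law resistance of a conductor and the piezoresistive coefficient) can be restated cleanly; this is only a restatement of formulas already cited there, with the symbols spelled out:

\[
R = \rho\,\frac{\ell}{A},
\qquad
\rho_\sigma = \frac{\partial\rho/\rho}{\varepsilon}
\]

where \(\ell\) is the conductor length, \(A\) its cross-sectional area, \(\rho\) the resistivity, \(\partial\rho\) the change in resistivity under load, and \(\varepsilon\) the applied strain.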
What is the significance of the redshift-distance relationship in determining the expansion history of the universe?,"The theory of relativity predicts phenomena associated with the expansion, notably the redshift-versus-distance relationship known as Hubble's Law; functional forms for cosmological distance measurements that differ from what would be expected if space were not expanding; and an observable change in the matter and energy density of the universe seen at different lookback times. Hubble's contribution was to show that the magnitude of the redshift correlated strongly with the distance to the galaxies. In standard inflationary cosmological models, the redshift of cosmological bodies is ascribed to the expansion of the universe, with greater redshift indicating greater cosmic distance from the Earth (see Hubble's Law). To determine the distance of distant objects, astronomers generally measure luminosity of standard candles, or the redshift factor 'z' of distant galaxies, and then convert these measurements into distances based on some particular model of spacetime, such as the Lambda-CDM model. However, galaxies lying farther away from this will recede away at ever-increasing speed and be redshifted out of our range of visibility. ===Metric expansion and speed of light=== At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity and this is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field). Redshift is directly observable and used by cosmologists as a direct measure of lookback time. Redshift-space distortions are an effect in observational cosmology where the spatial distribution of galaxies appears squashed and distorted when their positions are plotted as a function of their redshift rather than as a function of their distance. This explains observations that indicate that galaxies that are more distant from us are receding faster than galaxies that are closer to us (see Hubble's law). ===Cosmological constant and the Friedmann equations=== The first general relativistic models predicted that a universe that was dynamical and contained ordinary gravitational matter would contract rather than expand. Alternatively, Zwicky proposed a kind of Sachs–Wolfe effect explanation for the redshift distance relation: Zwicky's proposals were carefully presented as falsifiable according to later observations: Such broadening of absorption lines is not seen in high-redshift objects, thus falsifying this particular hypothesis.See, for example, high-redshift spectra shown at http://astrobites.com/2011/04/27/prospecting-for-c-iv-at-high-redshifts/ Zwicky also notes, in the same paper, that according to a tired light model a distance-redshift relationship would necessarily be present in the light from sources within our own galaxy (even if the redshift would be so small that it would be hard to measure), that do not appear under a recessional-velocity based theory. Others proposed that systematic effects could explain the redshift-distance correlation. A photometric redshift is an estimate for the recession velocity of an astronomical object such as a galaxy or quasar, made without measuring its spectrum. 
Princeton University Press, Following after Zwicky in 1935, Edwin Hubble and Richard Tolman compared recessional redshift with a non-recessional one, writing that they These conditions became almost impossible to meet and the overall success of general relativistic explanations for the redshift- distance relation is one of the core reasons that the Big Bang model of the universe remains the cosmology preferred by researchers. Most recently, by comparing the apparent brightness of distant standard candles to the redshift of their host galaxies, the expansion rate of the universe has been measured to be H0 = . The universal redshift-distance relation in this solution is attributable to the effect an expanding universe has on a photon traveling on a null spacetime interval (also known as a ""light-like"" geodesic). The effect is due to the peculiar velocities of the galaxies causing a Doppler shift in addition to the redshift caused by the cosmological expansion. When choosing an arbitrary reference point such as the gold galaxy or the red galaxy, the increased distance to other galaxies the further away they are appear the same. Sources of this confidence and confirmation include: * Hubble demonstrated that all galaxies and distant astronomical objects were moving away from us, as predicted by a universal expansion.Hubble, Edwin, ""A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae"" (1929) Proceedings of the National Academy of Sciences of the United States of America, Volume 15, Issue 3, pp. 168-173 (Full article, PDF) Using the redshift of their electromagnetic spectra to determine the distance and speed of remote objects in space, he showed that all objects are moving away from us, and that their speed is proportional to their distance, a feature of metric expansion. The first measurement of the expansion of space came with Hubble's realization of the velocity vs. redshift relation. He writes, referring to sources of light within our galaxy: ""It is especially desirable to determine the redshift independent of the proper velocities of the objects observed"". Other means of estimating the redshift based on alternative observed quantities have been developed, like for instance morphological redshifts applied to galaxy clusters which rely on geometric measurements J.M. Diego et al. Morphological redshift estimates for galaxy clusters in a Sunyaev-Zel'dovich effect survey. 
","Observations of the redshift-distance relationship can be used to determine the expansion history of the universe and the matter and energy content, especially for galaxies whose light has been travelling to us for much shorter times.","Observations of the redshift-distance relationship can be used to determine the age of the universe and the matter and energy content, especially for nearby galaxies whose light has been travelling to us for much shorter times.","Observations of the redshift-distance relationship can be used to determine the expansion history of the universe and the matter and energy content, especially for nearby galaxies whose light has been travelling to us for much longer times.","Observations of the redshift-distance relationship can be used to determine the age of the universe and the matter and energy content, especially for distant galaxies whose light has been travelling to us for much shorter times.","Observations of the redshift-distance relationship can be used to determine the expansion history of the universe and the matter and energy content, especially for distant galaxies whose light has been travelling to us for much longer times.",E,kaggle200,"Maps of large-scale structure can be used to measure the expansion history of the Universe because sound waves in the early Universe, or baryon acoustic oscillations (BAO), have left slight overdensities in the distribution of matter on scales of about 500 million light-years. This characteristic BAO scale has been well-measured by experiments like ""Planck"" and can therefore be used as a 'standard ruler' to determine the size of the Universe as a function of time, thereby indicating the expansion rate.
Larry Gonick (""The Cartoon History of the Universe"") produced graphic non-fiction about science and history for more than 30 years.
While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, recent observations of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate.
The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble ""constant"", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content.","Radiation epoch The history of the universe after inflation but before a time of about 1 second is largely unknown. However, the universe is known to have been dominated by ultrarelativistic Standard-Model particles, conventionally called radiation, by the time of neutrino decoupling at about 1 second. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time.
After the discovery of the redshift-distance relationship (deduced by the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space. Zwicky's proposal would come to be termed 'tired light'—a term invented by the major Big Bang proponent Richard Tolman.
Extragalactic observations The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest-observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about z = 1089 (z = 0 corresponds to present time), and it shows the state of the universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang.The luminous point-like cores of quasars were the first ""high-redshift"" (z > 0.1) objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies.For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about the year 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law.Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a fingers of god effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass-to-light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter.The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble ""constant"", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. 
Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content. While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, observations beginning in 1988 of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate.","Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content. While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, observations beginning in 1988 of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time.
After the discovery of the redshift-distance relationship (deduced by the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space. In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. This added component gives cosmologists a chance to measure the masses of objects independent of the mass-to-light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter. The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. This characteristic BAO scale has been well-measured by experiments like ""Pla
After the discovery of the redshift-distance relationship (deduced by the inverse correlation of galactic brightness to redshift) by American astronomers Vesto Slipher and Edwin Hubble, the astrophysicist and priest Georges Lemaître interpreted the redshift as evidence of universal expansion and thus a Big Bang, whereas Swiss astronomer Fritz Zwicky proposed that the redshift was caused by the photons losing energy as they passed through the matter and/or forces in intergalactic space.[SEP]What is the significance of the redshift-distance relationship in determining the expansion history of the universe?","['E', 'D', 'C']",1.0
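A minimal statement of the Hubble law discussed throughout this row, for readers scanning the dataset; no numerical value of H0 is implied here, since the row's own figure is cut off:

\[
v = H_0 D,
\qquad
cz \approx v \quad \text{for } z \ll 1
\]

At low redshift the redshift-distance relation is therefore linear, while at high redshift it becomes the non-linear integral relation described above, which is what makes it a probe of the expansion history.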
What is the Evans balance?,"An Evans balance, also known as a Johnson's balance (after a commercial producer of the Evans balance) is a device for measuring magnetic susceptibility. The Evans balance employs a similar sample configuration but measures the force on the magnet. ==Mechanism== The suspension trip has two pairs of magnets placed back-to-back, making a balanced system with a magnetic field at each end. The original Evans balance had an accuracy within 1% of literature values for diamagnetic solutions and within 2% of literature values of paramagnetic solids. The original Evans balance was described by the English scientist Dennis F. Evans in 1973, based on a torsional balance developed in 1937 by Alexander Rankine. Moreover, using a Evans balance is less time-consuming than using a Gouy or Faraday balances, although it is not sensitive and accurate in comparison to these last two systems. With the Evans balance, a reading could be taken in a matter of seconds with only small sacrifices in sensitivity and accuracy. For each measurement, only around 250 mg of sample is required (50 mg can be used for a thin-bore sample tube). == Calibration == The Evans balance measures susceptibility indirectly by referring to a calibration standard of known susceptibility. Some balances have an auto-tare feature that eliminates the need for the R0 measurement. To calculate the volume magnetic susceptibility (χ) instead of the weight susceptibility (χg), such as in a liquid sample, the equation would have the extra V term added to the numerator and instead of being divided by m, the equation would be divided by d for the density of the solution. == References == Category:Magnetometers A Johnson-Matthey balance has a range from 0.001 x 10−7 to 1.99 x 10−7 c.g.s. volume susceptibility units. When a sample tube was placed between the first pair of magnets, the torsional force was restored by the current passed through the coil between the second pair of magnets, giving a reading on a display instead of a Helipot (as was used in the original). ===Advantages vs alternative magnetic balances=== The main advantage of this system is that it is cheap to construct as it does not require a precision weighing device. Various practical devices are available for the measurement of susceptibility, which differ in the shape of the magnetic field and the way the force is measured. The system allows for measurements of solid, liquid, and gaseous forms of a wide range of paramagnetic and diamagnetic materials. Evans used Ticonal bars with cadmium-plated mild steel yokes as the magnets, a Johnson Matthey gold alloy (hence the other name of the balance) for the suspension strip, all glued together with epoxy resin onto a phosphor brown spacer. Magnetic susceptibility is related to the force experienced by a substance in a magnetic field. Evans v. Evans v. Evans v. The sample was placed into the gap between one pair of magnets and a small coil in the gap between the second pair of magnets. In Evans v. 
","The Evans balance is a system used to measure the change in weight of a sample when an electromagnet is turned on, which is proportional to the susceptibility.",The Evans balance is a system used to measure the dependence of the NMR frequency of a liquid sample on its shape or orientation to determine its susceptibility.,The Evans balance is a system used to measure the magnetic field distortion around a sample immersed in water inside an MR scanner to determine its susceptibility.,The Evans balance is a system used to measure the susceptibility of a sample by measuring the force change on a strong compact magnet upon insertion of the sample.,"The Evans balance is a system used to measure the magnetic susceptibility of most crystals, which is not a scalar quantity.",D,kaggle200,"The Evans balance measures susceptibility indirectly by referring to a calibration standard of known susceptibility. The most convenient compound for this purpose is mercury cobalt thiocyanate, HgCo(NCS), which has a susceptibility of 16.44×10 (±0.5%) CGS at 20 °C. Another common calibration standard is [Ni(en)]SO which has a susceptibility of 1.104 x 10 erg G cm. Three readings of the meter are needed, of an empty tube, ""R"" of the tube filled with calibrant and of the tube filled with the sample, ""R"". Some balances have an auto-tare feature that eliminates the need for the ""R"" measurement. Accuracy depends somewhat on homogeneous packing of the sample. The first two provide a calibration constant, ""C"". The mass susceptibility in grams is calculated as
Volume magnetic susceptibility is measured by the force change felt upon a substance when a magnetic field gradient is applied. Early measurements are made using the Gouy balance where a sample is hung between the poles of an electromagnet. The change in weight when the electromagnet is turned on is proportional to the susceptibility. Today, high-end measurement systems use a superconductive magnet. An alternative is to measure the force change on a strong compact magnet upon insertion of the sample. This system, widely used today, is called the Evans balance. For liquid samples, the susceptibility can be measured from the dependence of the NMR frequency of the sample on its shape or orientation.
The main advantage of this system is that it is cheap to construct as it does not require a precision weighing device. It is also more convenient to use than the Gouy and Faraday balances. These systems were very sensitive and accurate but were very time-consuming. One reason that they were time-consuming is that the sample had to be suspended in between the two poles of a very powerful magnet. The tube had to be suspended in the same place every time in order for the apparatus constant to be accurate. In the case of the Gouy balance, static charge on the glass tube often caused the tube to stick to magnets. With the Evans balance, a reading could be taken in a matter of seconds with only small sacrifices in sensitivity and accuracy. A Johnson-Matthey balance has a range from 0.001 x 10 to 1.99 x 10 c.g.s. volume susceptibility units. Even the original Evans balance had an accuracy within 1% of literature values for diamagnetic solutions and within 2% of literature values of paramagnetic solids.
An Evans balance, also known as a Johnson-Matthey balance (after a commercial producer of the Evans balance) is a device for measuring magnetic susceptibility. Magnetic susceptibility is related to the force experienced by a substance in a magnetic field. Various practical devices are available for the measurement of susceptibility, which differ in the shape of the magnetic field and the way the force is measured.","Volume magnetic susceptibility is measured by the force change felt upon a substance when a magnetic field gradient is applied. Early measurements are made using the Gouy balance where a sample is hung between the poles of an electromagnet. The change in weight when the electromagnet is turned on is proportional to the susceptibility. Today, high-end measurement systems use a superconductive magnet. An alternative is to measure the force change on a strong compact magnet upon insertion of the sample. This system, widely used today, is called the Evans balance. For liquid samples, the susceptibility can be measured from the dependence of the NMR frequency of the sample on its shape or orientation.Another method using NMR techniques measures the magnetic field distortion around a sample immersed in water inside an MR scanner. This method is highly accurate for diamagnetic materials with susceptibilities similar to water.
Advantages vs alternative magnetic balances The main advantage of this system is that it is cheap to construct as it does not require a precision weighing device. Moreover, using a Evans balance is less time-consuming than using a Gouy or Faraday balances, although it is not sensitive and accurate in comparison to these last two systems. One reason that they were time-consuming is that the sample had to be suspended between the two poles of a very powerful magnet. The tube had to be suspended in the same place every time for the apparatus constant to be accurate. In the case of the Gouy balance, the static charge on the glass tube often caused the tube to stick to magnets. With the Evans balance, a reading could be taken in a matter of seconds with only small sacrifices in sensitivity and accuracy. A Johnson-Matthey balance has a range from 0.001 x 10−7 to 1.99 x 10−7 c.g.s. volume susceptibility units. The original Evans balance had an accuracy within 1% of literature values for diamagnetic solutions and within 2% of literature values of paramagnetic solids.The system allows for measurements of solid, liquid, and gaseous forms of a wide range of paramagnetic and diamagnetic materials. For each measurement, only around 250 mg of sample is required (50 mg can be used for a thin-bore sample tube).
An Evans balance, also known as a Johnson's balance (after a commercial producer of the Evans balance) is a device for measuring magnetic susceptibility. Magnetic susceptibility is related to the force experienced by a substance in a magnetic field. Various practical devices are available for the measurement of susceptibility, which differ in the shape of the magnetic field and the way the force is measured. The Evans balance employs a similar sample configuration but measures the force on the magnet.","Even the original Evans balance had an accuracy within 1% of literature values for diamagnetic solutions and within 2% of literature values of paramagnetic solids.
An Evans balance, also known as a Johnson-Matthey balance (after a commercial producer of the Evans balance), is a device for measuring magnetic susceptibility. This system, widely used today, is called the Evans balance. For each measurement, only around 250 mg of sample is required (50 mg can be used for a thin-bore sample tube).
The Evans balance measures susceptibility indirectly by referring to a calibration standard of known susceptibility. Moreover, using an Evans balance is less time-consuming than using Gouy or Faraday balances, although it is not as sensitive and accurate as these last two systems. With the Evans balance, a reading could be taken in a matter of seconds with only small sacrifices in sensitivity and accuracy. The system allows for measurements of solid, liquid, and gaseous forms of a wide range of paramagnetic and diamagnetic materials. A Johnson-Matthey balance has a range from 0.001 x 10−7 to 1.99 x 10−7 c.g.s. volume susceptibility units. Various practical devices are available for the measurement of susceptibility, which differ in the shape of the magnetic field and the way the force is measured. The Evans balance employs a similar sample configuration but measures the force on the magnet. Some balances have an auto-tare feature that eliminates the need for the ""R"" measurement. It is also more convenient to use than the Gouy and Faraday balances. This method is highly accurate for diamagnetic materials with susceptibilities similar to water.
Advantages vs alternative magnetic balances: The main advantage of this system is that it is chea","Even the original Evans balance had an accuracy within 1% of literature values for diamagnetic solutions and within 2% of literature values of paramagnetic solids.
[SEP]What is the Evans balance?","['D', 'E', 'C']",1.0
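The calibration arithmetic described in this row (a reading R0 for the empty tube, one for the calibrant, one for the sample) can be sketched as a proportionality. The row's own formula is truncated, so this is a hedged reconstruction that additionally assumes the calibrant and the sample are packed to the same length in the tube:

\[
\chi_{g,\text{sample}} \approx \chi_{g,\text{standard}}
\times \frac{m_{\text{standard}}}{m_{\text{sample}}}
\times \frac{R_{\text{sample}} - R_0}{R_{\text{standard}} - R_0}
\]

Here m denotes the packed mass and R the balance reading; the first two readings provide the calibration constant, as the row states.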
What is the definition of dimension in mathematics?,"Dimensioning is the process of measuring either the area or the volume that an object occupies. In mathematics, metric dimension may refer to: * Metric dimension (graph theory), the minimum number of vertices of an undirected graph G in a subset S of G such that all other vertices are uniquely determined by their distances to the vertices in S * Minkowski–Bouligand dimension (also called the metric dimension), a way of determining the dimension of a fractal set in a Euclidean space by counting the number of fixed-size boxes needed to cover the set as a function of the box size * Equilateral dimension of a metric space (also called the metric dimension), the maximum number of points at equal distances from each other * Hausdorff dimension, an extended non-negative real number associated with any metric space that generalizes the notion of the dimension of a real vector space In mathematics, dimension theory is the study in terms of commutative algebra of the notion dimension of an algebraic variety (and by extension that of a scheme). In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field. p. 44, §2.36 It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. In mathematics, and particularly in graph theory, the dimension of a graph is the least integer such that there exists a ""classical representation"" of the graph in the Euclidean space of dimension with all the edges having unit length. :If dim V is infinite then |V| = \max (|F|, \dim V). == Generalizations == A vector space can be seen as a particular case of a matroid, and in the latter there is a well-defined notion of dimension. Firstly, it allows for a definition of a notion of dimension when one has a trace but no natural sense of basis. A large part of dimension theory consists in studying the conditions under which several dimensions are equal, and many important classes of commutative rings may be defined as the rings such that two dimensions are equal; for example, a regular ring is a commutative ring such that the homological dimension is equal to the Krull dimension. \- Why dimensioning? Some formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality of the space itself. The need of a theory for such an apparently simple notion results from the existence of many definitions of dimension that are equivalent only in the most regular cases (see Dimension of an algebraic variety). For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. The definition of the dimension of a graph given above says, of the minimal- representation: * if two vertices of are connected by an edge, they must be at unit distance apart; * however, two vertices at unit distance apart are not necessarily connected by an edge. In this case, which is the algebraic counterpart of the case of affine algebraic sets, most of the definitions of the dimension are equivalent. A different definition was proposed in 1991 by Alexander Soifer, for what he termed the Euclidean dimension of a graph. In the warehousing industry, dimensioning is used to provide an overview of the volume items in stock which can reduce the costs of materials, return handling, shipping and manpower. 
We say V is if the dimension of V is finite, and if its dimension is infinite. So the dimension depends on the base field. The injective dimension of an R-module M denoted by \operatorname{id}_R M is defined just like a projective dimension: it is the minimal length of an injective resolution of M. The dimensions are related by the formula \dim_K(V) = \dim_K(F) \dim_F(V). ","The dimension of an object is the number of independent parameters or coordinates needed to define the position of a point constrained to be on the object, and is an extrinsic property of the object, dependent on the dimension of the space in which it is embedded.","The dimension of an object is the number of degrees of freedom of a point that moves on this object, and is an extrinsic property of the object, dependent on the dimension of the space in which it is embedded.","The dimension of an object is the number of independent parameters or coordinates needed to define the position of a point constrained to be on the object, and is an intrinsic property of the object, independent of the dimension of the space in which it is embedded.","The dimension of an object is the number of directions in which a point can move on the object, and is an extrinsic property of the object, dependent on the dimension of the space in which it is embedded.","The dimension of an object is the number of directions in which a point can move on the object, and is an intrinsic property of the object, independent of the dimension of the space in which it is embedded.",C,kaggle200,"In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations (various notions of effective dimension) of which the most common is effective Hausdorff dimension. Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). Hausdorff dimension generalizes the well-known integer dimensions assigned to points, lines, planes, etc. by allowing one to distinguish between objects of intermediate size between these integer-dimensional objects. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are ""larger"" than lines or curves, and yet ""smaller"" than filled circles or rectangles. Effective dimension modifies Hausdorff dimension by requiring that objects with small effective dimension be not only small but also locatable (or partially locatable) in a computable sense. As such, objects with large Hausdorff dimension also have large effective dimension, and objects with small effective dimension have small Hausdorff dimension, but an object can have small Hausdorff but large effective dimension. An example is an algorithmically random point on a line, which has Hausdorff dimension 0 (since it is a point) but effective dimension 1 (because, roughly speaking, it can't be effectively localized any better than a small interval, which has Hausdorff dimension 1).
Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line.
In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc.","An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension -1.Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line.
In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc.","Dimension, in mathematics, is a particular way of describing the size of an object (contrasting with measure and other, different, notions of size). For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. For example, fractal subsets of the plane may have intermediate dimension between 1 and 2, as they are ""larger"" than lines or curves, and yet ""smaller"" than filled circles or rectangles. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. In mathematics, effective dimension is a modification of Hausdorff dimension and other fractal dimensions that places it in a computability theory setting. There are several variations (various notions of effective dimension) of which the most common is effective Hausdorff dimension. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded.[SEP]What is the definition of dimension in mathematics?","['C', 'E', 'D']",1.0
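As a concrete illustration of the base-field formula \dim_K(V) = \dim_K(F)\,\dim_F(V) quoted earlier in this row, here is a standard worked example, added as a reader aid rather than taken from the row's own text:

\[
\dim_{\mathbb{R}}(\mathbb{C}) = 2,
\qquad
\dim_{\mathbb{C}}(\mathbb{C}^{n}) = n
\;\Longrightarrow\;
\dim_{\mathbb{R}}(\mathbb{C}^{n}) = \dim_{\mathbb{R}}(\mathbb{C})\cdot\dim_{\mathbb{C}}(\mathbb{C}^{n}) = 2n
\]

which is why "the dimension depends on the base field", as the row notes.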
What is accelerator-based light-ion fusion?,"Heavy ion fusion is a fusion energy concept that uses a stream of high-energy ions from a particle accelerator to rapidly heat and compress a small pellet of fusion fuel. This is the HIF approach's major downside; although it is possible to build an accelerator with less beam current for testing purposes, the individual ions still require the same energy and thus the accelerator will be a similar size as a higher-current version for a production reactor. ===Advantages over lasers=== There are significant practical advantages to the use of ions over lasers. Migma uses self-intersecting beams of ions from small particle accelerators to force the ions to fuse. The types of experiments done at a particular accelerator facility are determined by characteristics of the generated particle beam such as average energy, particle type, intensity, and dimensions. ==Acceleration and interaction of particles with RF structures== While it is possible to accelerate charged particles using electrostatic fields, like in a Cockcroft-Walton voltage multiplier, this method has limits given by electrical breakdown at high voltages. The electric field does work on the ions heating them to fusion conditions. Accelerators have the potential to be much more efficient in terms of delivering energy to the fuel pellet; typical laser-based ""drivers"" have overall efficiency on the order of 1%, while heavy-ion systems aim for 30% or more. As ions fall down the potential well, the electric field works on them, heating it to fusion conditions. An accelerator capable of giving lead ions this level of energy is neither small nor inexpensive, even for low numbers of ions, making it difficult to produce in a small-scale device. Ions are electrostatically confined raising the density and increasing the fusion rate. The only approach that appears to have a theoretical possibility of working is the D-T or perhaps D-D reaction in a thermalized plasma mass. ==References== ==External links== *Patent 4788024: Apparatus and method for obtaining a self- colliding beam of charged particles operating above the space charge limit Category:Fusion reactors Ions that collide at high enough energies can fuse. This approach has been successful in producing fusion reactions, but to date the devices that can provide the compression, typically lasers, require more energy than the reactions produce. Migma testbed devices used accelerators of about 1 MeV,Migma IV High Energy Fusion Apperatus to 2 MeV. In the 1970s when the concept was first being considered, the most powerful accelerators, typically using electron or proton, accelerated small numbers of particles to high energies. Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. Direct conversion collectors inside the vacuum chamber would convert the alpha particles' kinetic energy to a high-voltage direct current. Their fusion occurs when the ions reach 4 keV (kiloelectronvolts), or about 45 million kelvins. To date, the record on NIF is 1.3 MJ of fusion from 2 MJ of laser output, from 422 MJ of electricity, so it is extremely unlikely the current approach could ever be used for power production. ===Alternate drivers=== In 1963, Friedwardt Winterberg introduced the concept of igniting fusion using small groups of particles that have been accelerated to about 200 km/s, a concept that is now known as cluster impact fusion. 
This means it can only accelerate short pulses of ions, and therefore requires some way to combine the pulses back together. Confining a gas at millions of degrees for this sort of time scale has proven difficult, although modern experimental machines are approaching the conditions needed for net power production. ==Migma fusion== The colliding beam approach avoided the problem of heating the mass of fuel to these temperatures by accelerating the ions directly in a particle accelerator. ","Accelerator-based light-ion fusion is a technique that uses particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. This method is relatively easy to implement and can be done in an efficient manner, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer. Fusion can be observed with as little as 10 kV between the electrodes.","Accelerator-based light-ion fusion is a technique that uses particle accelerators to achieve particle kinetic energies sufficient to induce heavy-ion fusion reactions. This method is relatively difficult to implement and requires a complex system of vacuum tubes, electrodes, and transformers. Fusion can be observed with as little as 10 kV between the electrodes.","Accelerator-based light-ion fusion is a technique that uses particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. This method is relatively difficult to implement and requires a complex system of vacuum tubes, electrodes, and transformers. Fusion can be observed with as little as 100 kV between the electrodes.","Accelerator-based light-ion fusion is a technique that uses particle accelerators to achieve particle kinetic energies sufficient to induce heavy-ion fusion reactions. This method is relatively easy to implement and can be done in an efficient manner, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer. Fusion can be observed with as little as 100 kV between the electrodes.","Accelerator-based light-ion fusion is a technique that uses particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fission reactions. This method is relatively easy to implement and can be done in an efficient manner, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer. Fission can be observed with as little as 10 kV between the electrodes.",A,kaggle200,"If matter is sufficiently heated (hence being plasma) and confined, fusion reactions may occur due to collisions with extreme thermal kinetic energies of the particles. Thermonuclear weapons produce what amounts to an uncontrolled release of fusion energy. Controlled thermonuclear fusion concepts use magnetic fields to confine the plasma.
A protein-lined pore perfectly meets all the observed requirements of the early fusion pore, and while some data does support this theory, sufficient data does not exist to pronounce it the primary method of fusion. A protein-lined pore requires at least five copies of the SNARE complex while fusion has been observed with as few as two.
For example, sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as ""beam-target"" fusion, or by accelerating two streams of ions towards each other, ""beam-beam"" fusion.","Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions.Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.A number of attempts to recirculate the ions that ""miss"" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reverse configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021. A closely related approach is to merge two FRC's rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions.Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. The key problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction cross-sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced annually for use in the petroleum industry where they are used in measurement equipment for locating and mapping oil reserves.A number of attempts to recirculate the ions that ""miss"" collisions have been made over the years. One of the better-known attempts in the 1970s was Migma, which used a unique particle storage ring to capture ions into circular orbits and return them to the reaction area. Theoretical calculations made during funding reviews pointed out that the system would have significant difficulty scaling up to contain enough fusion fuel to be relevant as a power source. In the 1990s, a new arrangement using a field-reverse configuration (FRC) as the storage system was proposed by Norman Rostoker and continues to be studied by TAE Technologies as of 2021. A closely related approach is to merge two FRC's rotating in opposite directions, which is being actively studied by Helion Energy. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.","Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as ""beam-target"" fusion, or by accelerating two streams of ions towards each other, ""beam-beam"" fusion. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. A protein-lined pore requires at least five copies of the SNARE complex while fusion has been observed with as few as two.
For example, sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achiev","Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. Because these approaches all have ion energies well beyond the Coulomb barrier, they often suggest the use of alternative fuel cycles like p-11B that are too difficult to attempt using conventional approaches.
Beam–beam or beam–target fusion Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The system can be arranged to accelerate ions into a static fuel-infused target, known as ""beam-target"" fusion, or by accelerating two streams of ions towards each other, ""beam-beam"" fusion. The system can be arranged to accelerate ions into a static fuel-infused target, known as beam–target fusion, or by accelerating two streams of ions towards each other, beam–beam fusion. Accelerating light ions is relatively easy, and can be done in an efficient manner—requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. A protein-lined pore requires at least five copies of the SNARE complex while fusion has been observed with as few as two.
For example, sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achiev[SEP]What is accelerator-based light-ion fusion?","['A', 'D', 'C']",1.0
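As a rough check on the figures quoted in this record, the 4 keV value can be compared with the stated roughly 45 million kelvins using the thermal equivalence E \approx k_B T (a back-of-the-envelope estimate, not taken from the source):

T \approx \frac{E}{k_B} = \frac{4\,000\ \text{eV}}{8.617\times 10^{-5}\ \text{eV/K}} \approx 4.6\times 10^{7}\ \text{K},

which is consistent with the tens-of-millions-of-kelvins figure quoted for the ions in the context above.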
What is the interstellar medium (ISM)?,"In astronomy, the interstellar medium (ISM) is the matter and radiation that exist in the space between the star systems in a galaxy. The interstellar medium is composed of multiple phases distinguished by whether matter is ionic, atomic, or molecular, and the temperature and density of the matter. The interstellar medium is composed, primarily, of hydrogen, followed by helium with trace amounts of carbon, oxygen, and nitrogen. Stars form within the densest regions of the ISM, which ultimately contributes to molecular clouds and replenishes the ISM with matter and energy through planetary nebulae, stellar winds, and supernovae. In the interstellar medium, matter is primarily in molecular form, and reaches number densities of 106 molecules per cm3 (1 million molecules per cm3). Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely a plasma - it is everywhere at least slightly ionized), responding to pressure forces, and not as a collection of non-interacting particles. The growing evidence for interstellar material led to comment that ""While the interstellar absorbing medium may be simply the ether, yet the character of its selective absorption, as indicated by Kapteyn, is characteristic of a gas, and free gaseous molecules are certainly there, since they are probably constantly being expelled by the Sun and stars."" This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space. This interplay between stars and the ISM helps determine the rate at which a galaxy depletes its gaseous content, and therefore its lifespan of active star formation. In astronomy, the intracluster medium (ICM) is the superheated plasma that permeates a galaxy cluster. The gas consists mainly of ionized hydrogen and helium and accounts for most of the baryonic material in galaxy clusters. Since the interplanetary medium is a plasma, or gas of ions, the interplanetary medium has the characteristics of a plasma, rather than a simple gas. The interplanetary medium (IPM) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The ISM plays a crucial role in astrophysics precisely because of its intermediate role between stellar and galactic scales. Before 1950, interplanetary space was widely considered to either be an empty vacuum, or consisting of ""aether"". ==Composition and physical characteristics== The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. However, the interstellar radiation field is typically much weaker than a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. But the column density through the atmosphere is vastly larger than the column through the entire Galaxy, due to the extremely low density of the ISM. ==History of knowledge of interstellar space== The word 'interstellar' (between the stars) was coined by Francis Bacon in the context of the ancient theory of a literal sphere of fixed stars. 
The interplanetary medium thus fills the roughly spherical volume contained within the heliopause. ==Interaction with planets== How the interplanetary medium interacts with planets depends on whether they have magnetic fields or not. In the series of investigations, Viktor Ambartsumian introduced the now commonly accepted notion that interstellar matter occurs in the form of clouds. ","The matter and radiation that exist in the space between the star systems in a galaxy, including gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space.","The matter and radiation that exist in the space between stars in a galaxy, including gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding interplanetary space.","The matter and radiation that exist in the space between galaxies, including gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills intergalactic space and blends smoothly into the surrounding interstellar space.","The matter and radiation that exist in the space between planets in a solar system, including gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interplanetary space and blends smoothly into the surrounding interstellar space.","The matter and radiation that exist within a star, including gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills the star and blends smoothly into the surrounding interstellar space.",A,kaggle200,"Interstellar space is defined as the space beyond a magnetic region that extends about 122 AU from the Sun, as detected by ""Voyager 1,"" and the equivalent region of influence surrounding other stars. ""Voyager 1"" entered interstellar space in 2012.
It was determined that in 2012 ""Voyager 1"" entered interstellar space, that is, it entered the interstellar medium between the stars. One of the reasons this was recognized was a significant increase in galactic cosmic rays.
In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium is the matter and radiation that exist in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.","Interplanetary space - Interplanetary medium - interplanetary dust Interstellar space - Interstellar medium - interstellar dust Intergalactic space - Intergalactic medium - Intergalactic dust
X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium (ISM) is the matter and radiation that exist in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma: it is everywhere at least slightly ionized), responding to pressure forces, and not as a collection of non-interacting particles.","One of the reasons this was recognized was a significant increase in galactic cosmic rays.
In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. Interplanetary space - Interplanetary medium - interplanetary dust; Interstellar space - Interstellar medium - interstellar dust; Intergalactic space - Intergalactic medium - Intergalactic dust
X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium (ISM) is the matter and radiation that exist in the space between the star systems in a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma: it is everywhere at least slightly ionized), responding to pressure forces, and not as a collection of non-interacting particles. Interstellar space is defined as the space beyond a magnetic region that extends about 122 AU from the Sun, as detected by ""Voyager 1,"" and the equivalent region of influence surrounding other stars. It fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium is the matter and radiation that exist in the space between the star systems","One of the reasons this was recognized was a significant increase in galactic cosmic rays.
In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. Interplanetary space - Interplanetary medium - interplanetary dust; Interstellar space - Interstellar medium - interstellar dust; Intergalactic space - Intergalactic medium - Intergalactic dust
X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium (ISM) is the matter and radiation that exist in the space between the star systems in a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma: it is everywhere at least slightly ionized), responding to pressure forces, and not as a collection of non-interacting particles. Interstellar space is defined as the space beyond a magnetic region that extends about 122 AU from the Sun, as detected by ""Voyager 1,"" and the equivalent region of influence surrounding other stars. It fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
In astronomy, the interstellar medium is the matter and radiation that exist in the space between the star systems[SEP]What is the interstellar medium (ISM)?","['A', 'B', 'C']",1.0
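The record's claim that the ISM mean free path is short compared to interstellar lengths can be illustrated with \lambda = 1/(n\sigma). The number density 10^6 cm^-3 is the molecular-phase value quoted in the context; the collision cross-section \sigma \sim 10^{-15}\ \text{cm}^2 is an assumed order-of-magnitude figure added only for this estimate:

\lambda = \frac{1}{n\sigma} \sim \frac{1}{(10^{6}\ \text{cm}^{-3})(10^{-15}\ \text{cm}^{2})} = 10^{9}\ \text{cm} \approx 10^{4}\ \text{km},

which is negligible next to a parsec (about 3\times 10^{13} km), so on interstellar scales the medium responds collectively to pressure forces, as stated above.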
What is the significance of the change in slope of the pinched hysteresis curves in ReRAM and other forms of two-terminal resistance memory?,"Whether redox- based resistively switching elements (ReRAM) are covered by the current memristor theory is disputed. Leon Chua argued that all two- terminal non-volatile memory devices including ReRAM should be considered memristors. ReRAM bears some similarities to conductive-bridging RAM (CBRAM) and phase- change memory (PCM). Resistive random-access memory (ReRAM or RRAM) is a type of non-volatile (NV) random-access (RAM) computer memory that works by changing the resistance across a dielectric solid-state material, often referred to as a memristor. These defects are essential for the defect drift-dominated resistive switching memory. On 8 July they announced they would begin prototyping ReRAM using their memristors.EETimes.com – Memristors ready for prime time HP first demonstrated its memristor using TiOx,D. B. Strukov, Nature 453, 80 (2008). but later migrated to TaOx,J. P. Strachan et al., IEEE Trans. Elec. Dev. 60, 2194 (2013). possibly due to improved stability. This mechanism is supported by marked variation in capacitance value in ON and OFF states. == ReRam test boards == * Panasonic AM13L-STK2 : MN101LR05D 8-bit MCU with built in ReRAM for evaluation, connector == Future applications == Compared to PRAM, ReRAM operates at a faster timescale (switching time can be less than 10 ns), while compared to MRAM, it has a simpler, smaller cell structure (less than 8F² MIM stack). Filamentary and homogenous switching effects can be distinguished by measuring the area dependence of the low-resistance state. However, others challenged this terminology and the applicability of memristor theory to any physically realizable device is open to question. Silicon dioxide was shown to exhibit resistive switching as early as May 1966, and has recently been revisited. Scientific reports, 10(1), 1-8. == Demonstrations == Papers at the IEDM Conference in 2007 suggested for the first time that ReRAM exhibits lower programming currents than PRAM or MRAM without sacrificing programming performance, retention or endurance. Silicon oxide presents an interesting case of resistance switching. Stan Williams of HP Labs also argued that ReRAM was a memristor. These can be grouped into the following categories: * phase-change chalcogenides such as or AgInSbTe * binary transition metal oxides such as NiO or * perovskites such as Sr(Zr) or PCMO * solid-state electrolytes such as GeS, GeSe, or * organic charge-transfer complexes such as CuTCNQ * organic donor–acceptor systems such as Al AIDCN * two dimensional (layered) insulating materials like hexagonal boron nitride == RRAM Based on Perovskite == ABO3-type inorganic perovskite materials such as BaTiO3, SrRuO3, SrZrO3, and SrTiO3 have attracted extensive research interest as the storage media in memristors due to their remarkable resistance switching effects and various functionalities such as ferroelectric, dielectric, and semiconducting physical characteristics.S.C. Lee, Q. Hu, Y.-J. Baek, Y.J. Choi, C.J. Kang, H.H. Lee, T.-S. Yoon, Analog and bipolar resistive switching in pn junction of n-type ZnO nanowires on p-type Si substrate, J. Appl. Phys. 114 (2013) 1–5. 
Bulk switching in silicon oxide, pioneered by researchers at UCL (University College London) since 2012, offers low electroforming voltages (2.5V), switching voltages around 1V, switching times in the nanoseconds regime, and more than 10,000,000 cycles without device failure - all in ambient conditions. == Forming == right|thumb|300px|Filament forming: A 50 nm × 50 nm ReRAM cell by [http://www.crossbar- inc.com/assets/img/media/Crossbar-RRAM-Technology-Whitepaper-080413.pdf Crossbar] the instance of filament forming when the current abruptly increases beyond a certain voltage. Metal halide perovskites for resistive switching memory devices and artificial synapses. Unipolar switching leaves polarity unaffected, but uses different voltages. == Material systems for resistive memory cells == Multiple inorganic and organic material systems display thermal or ionic resistive switching effects. The subthreshold slope is a feature of a MOSFET's current–voltage characteristic. By contrast, ReRAM involves generating defects in a thin oxide layer, known as oxygen vacancies (oxide bond locations where the oxygen has been removed), which can subsequently charge and drift under an electric field. Plateau potentials, caused by persistent inward currents (PICs), are a type of electrical behavior seen in neurons. == Spinal Cord == Plateau potentials are of particular importance to spinal cord motor systems. ","The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states, which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory.","The change in slope of the pinched hysteresis curves indicates the presence of a Type-II non-crossing curve, which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory.","The change in slope of the pinched hysteresis curves demonstrates the presence of a memristor, which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory.","The change in slope of the pinched hysteresis curves demonstrates the presence of a memristive network, which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory.","The change in slope of the pinched hysteresis curves indicates the presence of a linear resistor, which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory.",A,kaggle200,"formula_14 is the voltage across the discharge tube, formula_11 is the current flowing through it and formula_12 is the number of conduction electrons. A simple memristance function is formula_17. formula_18 and formula_19 are parameters depending on the dimensions of the tube and the gas fillings. An experimental identification of memristive behaviour is the ""pinched hysteresis loop"" in the formula_20 plane. For an experiment that shows such a characteristic for a common discharge tube, see ""A physical memristor Lissajous figure"" (YouTube). The video also illustrates how to understand deviations in the pinched hysteresis characteristics of physical memristors.
Another example suggests including an offset value formula_43 to account for an observed nanobattery effect which violates the predicted zero-crossing pinched hysteresis effect.
It can also be done to change the slope or camber of the road or for grade adjustments which can help with drainage.
One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. For a current-controlled memristive system, the input ""u""(""t"") is the current ""i""(""t""), the output ""y""(""t"") is the voltage ""v""(""t""), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors.","In the early 2000s, ReRAMs were under development by a number of companies, some of which filed patent applications claiming various implementations of this technology. ReRAM has entered commercialization on an initially limited KB-capacity scale.In February 2012, Rambus bought a ReRAM company called Unity Semiconductor for $35 million. Panasonic launched an ReRAM evaluation kit in May 2012, based on a tantalum oxide 1T1R (1 transistor – 1 resistor) memory cell architecture.In 2013, Crossbar introduced an ReRAM prototype as a chip about the size of a postage stamp that could store 1 TB of data. In August 2013, the company claimed that large-scale production of their ReRAM chips was scheduled for 2015. The memory structure (Ag/a-Si/Si) closely resembles a silver-based CBRAM.
Panasonic AM13L-STK2 : MN101LR05D 8-bit MCU with built in ReRAM for evaluation, USB 2.0 connector
Pinched hysteresis One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors.","The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. The video also illustrates how to understand deviations in the pinched hysteresis characteristics of physical memristors.
Another example suggests including an offset value formula_43 to account for an observed nanobattery effect which violates the predicted zero-crossing pinched hysteresis effect.
It can also be done to change the slope or camber of the road or for grade adjustments which can help with drainage.
One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. An experimental identification of memristive behaviour is the ""pinched hysteresis loop"" in the formula_20 plane. The memory structure (Ag/a-Si/Si) closely resembles a silver-based CBRAM.
Panasonic AM13L-STK2 : MN101LR05D 8-bit MCU with built in ReRAM for evaluation, USB 2.0 connector
Pinched hysteresis One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. For a current-controlled memristive system, the input ""u""(""t"") is the current ""i""(""t""), the output ""y""(""t"") is the voltage ""v""(""t""), and the slope of the curve represents the electrical resistance. For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. Panasonic launched an ReRAM evaluation kit in May 2012, based on a tantalum oxide 1T1R (1 transistor – 1 resistor) memory cell architecture. In 2013, Crossba","The change in slope of the pinched hysteresis curves demonstrates switching between different resistance states which is a phenomenon central to ReRAM and other forms of two-terminal resistance memory. At high frequencies, memristive theory predicts the pinched hysteresis effect will degenerate, resulting in a straight line representative of a linear resistor. The video also illustrates how to understand deviations in the pinched hysteresis characteristics of physical memristors.
Another example suggests including an offset value formula_43 to account for an observed nanobattery effect which violates the predicted zero-crossing pinched hysteresis effect.
It can also be done to change the slope or camber of the road or for grade adjustments which can help with drainage.
One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. It has been proven that some types of non-crossing pinched hysteresis curves (denoted Type-II) cannot be described by memristors. An experimental identification of memristive behaviour is the ""pinched hysteresis loop"" in the formula_20 plane. The memory structure (Ag/a-Si/Si) closely resembles a silver-based CBRAM.
Panasonic AM13L-STK2 : MN101LR05D 8-bit MCU with built in ReRAM for evaluation, USB 2.0 connector
Pinched hysteresis One of the resulting properties of memristors and memristive systems is the existence of a pinched hysteresis effect. For a current-controlled memristive system, the input ""u""(""t"") is the current ""i""(""t""), the output ""y""(""t"") is the voltage ""v""(""t""), and the slope of the curve represents the electrical resistance. For a current-controlled memristive system, the input u(t) is the current i(t), the output y(t) is the voltage v(t), and the slope of the curve represents the electrical resistance. Panasonic launched an ReRAM evaluation kit in May 2012, based on a tantalum oxide 1T1R (1 transistor – 1 resistor) memory cell architecture. In 2013, Crossba[SEP]What is the significance of the change in slope of the pinched hysteresis curves in ReRAM and other forms of two-terminal resistance memory?","['A', 'C', 'B']",1.0
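The pinched-hysteresis behaviour this record turns on can be reproduced numerically. The code below is a minimal sketch, assuming an HP-style linear ion-drift memristor model with illustrative parameter values (R_ON, R_OFF, D, MU and the drive are assumptions, not taken from the devices discussed above); plotting v against i traces the loop whose change of slope marks the switch between resistance states.

```python
# Illustrative sketch only: an HP-style linear ion-drift memristor model with
# assumed parameter values, driven by a sinusoidal current, to show the
# pinched hysteresis loop and its change of slope between resistance states.
import numpy as np

R_ON, R_OFF = 100.0, 16e3     # assumed low/high resistance limits (ohm)
D = 10e-9                     # assumed device thickness (m)
MU = 1e-14                    # assumed dopant mobility (m^2 V^-1 s^-1)

dt = 1e-5
t = np.arange(0.0, 0.2, dt)               # two periods of a 10 Hz drive
i = 5e-3 * np.sin(2 * np.pi * 10 * t)     # drive current (A)

w = 0.5 * D                               # state variable: doped-region width
v = np.empty_like(t)
for k, ik in enumerate(i):
    x = w / D
    M = R_ON * x + R_OFF * (1.0 - x)      # state-dependent memristance
    v[k] = M * ik                         # memristive relation: v = M(w) * i
    w += MU * R_ON / D * ik * dt          # linear drift of the state variable
    w = min(max(w, 0.0), D)               # clamp to the physical range [0, D]

# Plotting v against i (e.g. with matplotlib) gives a loop pinched at the
# origin; its two slopes correspond to the low- and high-resistance states.
```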
What is geometric quantization in mathematical physics?,"In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase space formulation of conventional quantum mechanics. == Geometric quantization == In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. In theoretical physics, quantum geometry is the set of mathematical concepts generalizing the concepts of geometry whose understanding is necessary to describe the physical phenomena at distance scales comparable to the Planck length. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. It is possible (but considered unlikely) that this strictly quantized understanding of geometry will be consistent with the quantum picture of geometry arising from string theory. In physics, quantisation (in American English quantization) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. The construction of the preceding Hilbert space and the operators Q(f) is known as prequantization. ===Polarization=== The next step in the process of geometric quantization is the choice of a polarization. String theory, a leading candidate for a quantum theory of gravity, uses the term quantum geometry to describe exotic phenomena such as T-duality and other geometric dualities, mirror symmetry, topology-changing transitions, minimal possible distance scale, and other effects that challenge intuition. A first quantization of a physical system is a possibly semiclassical treatment of quantum mechanics, in which particles or physical objects are treated using quantum wave functions but the surrounding environment (for example a potential well or a bulk electromagnetic field or gravitational field) is treated classically. In the case that the area of the sphere is 2\pi\hbar, we obtain the two-dimensional spin-½ representation. ==See also== * Half-form * Lagrangian foliation * Kirillov orbit method * Quantization commutes with reduction == Notes == ==Citations== ==Sources== * * * * * * * * * ==External links== * William Ritter's review of Geometric Quantization presents a general framework for all problems in physics and fits geometric quantization into this framework * John Baez's review of Geometric Quantization, by John Baez is short and pedagogical * Matthias Blau's primer on Geometric Quantization, one of the very few good primers (ps format only) * A. Echeverria-Enriquez, M. Munoz-Lecanda, N. Roman- Roy, Mathematical foundations of geometric quantization, . More technically, quantum geometry refers to the shape of a spacetime manifold as experienced by D-branes which includes quantum corrections to the metric tensor, such as the worldsheet instantons. 
In an alternative approach to quantum gravity called loop quantum gravity (LQG), the phrase ""quantum geometry"" usually refers to the formalism within LQG where the observables that capture the information about the geometry are now well defined operators on a Hilbert space. The term quantization may refer to: == Signal processing == * Quantization (signal processing) ** Quantization (image processing) *** Color quantization ** Quantization (music) == Physics == * Quantization (physics) ** Canonical quantization ** Geometric quantization * Discrete spectrum, or otherwise discrete quantity ** Spatial quantization ** Charge quantization == Computing == * The process of making the signal discrete in amplitude by approximating the sampled signal to the nearest pre- defined level is called as quantization == Linguistics == * Quantization (linguistics) == Similar terms == * Quantification (science) ""Quantization methods: a guide for physicists and analysts"". Quantization is the process of constraining an input from a continuous or otherwise large set of values (such as the real numbers) to a discrete set (such as the integers). At these distances, quantum mechanics has a profound effect on physical phenomena. ==Quantum gravity== Each theory of quantum gravity uses the term ""quantum geometry"" in a slightly different fashion. The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space. == Loop quantization == See Loop quantum gravity. == Path integral quantization == A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. For generality, a formalism which can be used in any coordinate system is useful. ==See also== * Noncommutative geometry ==References== ==Further reading== * Supersymmetry, Demystified, P. Labelle, McGraw-Hill (USA), 2010, * Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, * Quantum Mechanics Demystified, D. McMahon, Mc Graw Hill (USA), 2006, * Quantum Field Theory, D. McMahon, Mc Graw Hill (USA), 2008, ==External links== *Space and Time: From Antiquity to Einstein and Beyond *Quantum Geometry and its Applications Category:Quantum gravity Category:Quantum mechanics Category:Mathematical physics ",Geometric quantization is a mathematical approach to defining a classical theory corresponding to a given quantum theory. It attempts to carry out quantization in such a way that certain analogies between the quantum theory and the classical theory are lost.,Geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization in such a way that certain analogies between the classical theory and the quantum theory are lost.,Geometric quantization is a mathematical approach to defining a classical theory corresponding to a given quantum theory. It attempts to carry out quantization in such a way that certain analogies between the quantum theory and the classical theory remain manifest.,Geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. 
It attempts to carry out quantization in such a way that certain analogies between the classical theory and the quantum theory are not important.,Geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization in such a way that certain analogies between the classical theory and the quantum theory remain manifest.,E,kaggle200,"The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just like in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.","The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.","One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just like in quantum mechanics the energy of an electron in the hydrogen atom was quantized. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in","One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. Quantization is a procedure for constructing a quantum theory starting from a classical theory.
The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just like in quantum mechanics the energy of an electron in the hydrogen atom was quantized. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in[SEP]What is geometric quantization in mathematical physics?","['C', 'E', 'D']",0.5
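The operators Q(f) mentioned in this record's context (prequantization) have a standard Kostant–Souriau form; the display below is a schematic reminder, with signs and factors following one common convention that differs between references:

Q(f) = -\,i\hbar\,\nabla_{X_f} + f, \qquad [Q(f), Q(g)] = i\hbar\, Q(\{f, g\}) \quad \text{(up to the chosen sign convention)},

acting on sections of a prequantum line bundle whose connection \nabla has curvature proportional to \omega/\hbar; choosing a polarization then restricts the sections to the quantum Hilbert space, as the context describes.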
What is the definition of an improper rotation?,"The ""improper rotation"" term refers to isometries that reverse (flip) the orientation. That is, any improper orthogonal 3x3 matrix may be decomposed as a proper rotation (from which an axis of rotation can be found as described above) followed by an inversion (multiplication by −1). Therefore, we don't have a proper rotation, but either the identity or the result of a sequence of reflections. In contrast, the reflectional symmetry is not a precise symmetry law of nature. ==Generalizations== The complex-valued matrices analogous to real orthogonal matrices are the unitary matrices \mathrm{U}(n), which represent rotations in complex space. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of them having an entire -dimensional flat of fixed points in a -dimensional space. Every proper rotation A in 3D space has an axis of rotation, which is defined such that any vector v that is aligned with the rotation axis will not be affected by rotation. As it was already stated, a (proper) rotation is different from an arbitrary fixed-point motion in its preservation of the orientation of the vector space. The circular symmetry is an invariance with respect to all rotation about the fixed axis. Any rotation about the origin can be represented as the composition of three rotations defined as the motion obtained by changing one of the Euler angles while leaving the other two constant. The reverse (inverse) of a rotation is also a rotation. The corresponding rotation axis must be defined to point in a direction that limits the rotation angle to not exceed 180 degrees. But a (proper) rotation also has to preserve the orientation structure. A rotation is simply a progressive radial orientation to a common point. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed. This definition applies to rotations within both two and three dimensions (in a plane and in space, respectively.) The former are sometimes referred to as affine rotations (although the term is misleading), whereas the latter are vector rotations. It is a broader class of the sphere transformations known as Möbius transformations. ===Discrete rotations=== ==Importance== Rotations define important classes of symmetry: rotational symmetry is an invariance with respect to a particular rotation. Intrinsic (green), Precession (blue) and Nutation (red) Euler rotations provide an alternative description of a rotation. Matrices of all proper rotations form the special orthogonal group. ====Two dimensions==== In two dimensions, to carry out a rotation using a matrix, the point to be rotated counterclockwise is written as a column vector, then multiplied by a rotation matrix calculated from the angle : : \begin{bmatrix} x' \\\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\\ y \end{bmatrix}. These rotations are called precession, nutation, and intrinsic rotation. == Flight dynamics == right|thumb|The principal axes of rotation in space In flight dynamics, the principal rotations described with Euler angles above are known as pitch, roll and yaw. ","An improper rotation is the combination of a rotation about an axis and reflection in a plane perpendicular to that axis, or inversion about a point on the axis. 
The order of the rotation and reflection does not matter, and the symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both, and a third plane.","An improper rotation is the combination of a rotation about an axis and reflection in a plane perpendicular to that axis, or inversion about a point on the axis. The order of the rotation and reflection does not matter, and the symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both.","An improper rotation is the combination of a rotation about an axis and reflection in a plane parallel to that axis, or inversion about a point on the axis. The order of the rotation and reflection does not matter, and the symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or neither.","An improper rotation is the combination of a rotation about an axis and reflection in a plane perpendicular to that axis, or inversion about a point on the axis. The order of the rotation and reflection does not matter, and the symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or neither.","An improper rotation is the combination of a rotation about an axis and reflection in a plane parallel to that axis, or inversion about a point on the axis. The order of the rotation and reflection does not matter, and the symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both.",B,kaggle200,"An improper rotation of an object thus produces a rotation of its mirror image. The axis is called the rotation-reflection axis. This is called an ""n""-fold improper rotation if the angle of rotation, before or after reflexion, is 360°/""n"" (where ""n"" must be even). There are several different systems for naming individual improper rotations:
In 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axis. For this reason it is also called a rotoinversion or rotary inversion. The two definitions are equivalent because rotation by an angle θ followed by reflection is the same transformation as rotation by θ + 180° followed by inversion (taking the point of inversion to be in the plane of reflection). In both definitions, the operations commute.
In geometry, an improper rotation, also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. Reflection and inversion are each special case of improper rotation. Any improper rotation is an affine transformation and, in cases that keep the coordinate origin fixed, a linear transformation.
An improper rotation is the composition of a rotation about an axis, and reflection in a plane perpendicular to that axis. The order in which the rotation and reflection are performed does not matter (that is, these operations commute). Improper rotation is also defined as the composition of a rotation about an axis, and inversion about a point on the axis. These definitions are equivalent because inversion about a point is equivalent to rotation by 180° about any axis, followed by mirroring about a plane perpendicular to that axis. The symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both. The improper rotation group of order 2""n"" is denoted ""S"".","In 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axis. For this reason it is also called a rotoinversion or rotary inversion. The two definitions are equivalent because rotation by an angle θ followed by reflection is the same transformation as rotation by θ + 180° followed by inversion (taking the point of inversion to be in the plane of reflection). In both definitions, the operations commute.
In geometry, an improper rotation (also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion) is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. Reflection and inversion are each special case of improper rotation. Any improper rotation is an affine transformation and, in cases that keep the coordinate origin fixed, a linear transformation.
An improper rotation is the composition of a rotation about an axis, and reflection in a plane perpendicular to that axis. The order in which the rotation and reflection are performed does not matter (that is, these operations commute). Improper rotation is also defined as the composition of a rotation about an axis, and inversion about a point on the axis. These definitions are equivalent because inversion about a point is equivalent to rotation by 180° about any axis, followed by mirroring about a plane perpendicular to that axis. The symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both. The improper rotation group of order 2n is denoted S2n.","Improper rotation is also defined as the composition of a rotation about an axis, and inversion about a point on the axisIn 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axisIn both definitions, the operations commute.
In geometry, an improper rotation, also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. In both definitions, the operations commute.
In geometry, an improper rotation (also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion) is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. Any improper rotation is an affine transformation and, in cases that keep the coordinate origin fixed, a linear transformation.
An improper rotation is the composition of a rotation about an axis, and reflection in a plane perpendicular to that axis. There are several different systems for naming individual improper rotations:
In 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axisThe symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both- An improper rotation of an object thus produces a rotation of its mirror imageThis is called an ""n""-fold improper rotation if the angle of rotation, before or after reflexion, is 360°/""n"" (where ""n"" must be even)Reflection and inversion are each special case of improper rotationThe improper rotation group of order 2""n"" is denoted ""S"".For this reason it is also called a rotoinversion or rotary inversionThe improper rotation group of order 2n is denoted S2nThe two definitions are equivalent because rotation by an angle θ followed by reflection is the same transformation as rotation by θ + 180° followed by inversion (taking the point of i","Improper rotation is also defined as the composition of a rotation about an axis, and inversion about a point on the axisIn 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axisIn both definitions, the operations commute.
In geometry, an improper rotation, also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. In both definitions, the operations commute.
In geometry, an improper rotation (also called rotation-reflection, rotoreflection, rotary reflection, or rotoinversion) is an isometry in Euclidean space that is a combination of a rotation about an axis and a reflection in a plane perpendicular to that axis. Any improper rotation is an affine transformation and, in cases that keep the coordinate origin fixed, a linear transformation.
An improper rotation is the composition of a rotation about an axis, and reflection in a plane perpendicular to that axis. There are several different systems for naming individual improper rotations:
In 3 dimensions, improper rotation is equivalently defined as a combination of rotation about an axis and inversion in a point on the axisThe symmetry elements for improper rotation are the rotation axis, and either the mirror plane, the inversion point, or both- An improper rotation of an object thus produces a rotation of its mirror imageThis is called an ""n""-fold improper rotation if the angle of rotation, before or after reflexion, is 360°/""n"" (where ""n"" must be even)Reflection and inversion are each special case of improper rotationThe improper rotation group of order 2""n"" is denoted ""S"".For this reason it is also called a rotoinversion or rotary inversionThe improper rotation group of order 2n is denoted S2nThe two definitions are equivalent because rotation by an angle θ followed by reflection is the same transformation as rotation by θ + 180° followed by inversion (taking the point of i[SEP]What is the definition of an improper rotation?","['D', 'B', 'A']",0.5
"What is power density in the context of energy systems, and how does it differ between renewable and non-renewable energy sources?","Power density is the amount of power (time rate of energy transfer) per unit volume. Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m3. In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. Specific energy density may refer to: * Energy density, energy per unit volume * Specific energy, energy per unit mass This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's. ==Energy density of electric and magnetic fields== Electric and magnetic fields store energy. The energy density of a fuel per unit mass is called the specific energy of that fuel. Renewable energy replaces conventional fuels in four distinct areas: electricity generation, air and water heating/cooling, motor fuels, and rural (off-grid) energy services.REN21 (2010). Resource consumption is about the consumption of non-renewable, or less often, renewable resources. Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat.Omar Ellabban, Haitham Abu-Rub, Frede Blaabjerg, Renewable energy resources: Current status, future prospects and their enabling technology. A convenient table of HHV and LHV of some fuels can be found in the references. ==In energy storage and fuels== thumb|400px|Selected energy densities plot In energy storage applications the energy density relates the energy in an energy store to the volume of the storage facility, e.g. the fuel tank. In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size. ==Examples== Storage material Energy type Specific power (W/kg) Power density (W/m3) Hydrogen (in star) Stellar fusion 0.00184 276.5 Plutonium Alpha decay 1.94 38,360 Supercapacitors Capacitance up to 15000 Variable Lithium-ion Chemical ~250–350 ~700 ==See also== *Surface power density, energy per unit of area *Energy density, energy per unit volume *Specific energy, energy per unit mass *Power-to-weight ratio/specific power, power per unit mass **Specific absorption rate (SAR) ==References== Category:Power (physics) Coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. 
Specifically, it may refer to: * water consumption * energy consumption ** electric energy consumption ** world energy consumption * natural gas consumption/gas depletion * oil consumption/oil depletion * logging/deforestation * fishing/overfishing * land use/land loss or * resource depletion and * general exploitation and associated environmental degradation Measures of resource consumption are resource intensity and resource efficiency. The figure above shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Energy per unit volume has the same physical units as pressure and in many situations is synonymous. The (volumetric) energy density is given by : u = \frac{\varepsilon}{2} \mathbf{E}^2 + \frac{1}{2\mu} \mathbf{B}^2 where is the electric field, is the magnetic field, and and are the permittivity and permeability of the surroundings respectively. These are lists about renewable energy: * Index of solar energy articles * List of books about renewable energy * List of concentrating solar thermal power companies * List of countries by electricity production from renewable sources * List of energy storage projects * Lists of environmental topics * List of geothermal power stations * List of hydroelectric power stations * List of largest hydroelectric power stations * List of offshore wind farms * Lists of offshore wind farms by country * Lists of offshore wind farms by water area * List of onshore wind farms * List of onshore wind farms in the United Kingdom * List of people associated with renewable energy * List of photovoltaics companies * List of photovoltaic power stations * List of pioneering solar buildings * List of renewable energy organizations * List of renewable energy topics by country * List of rooftop photovoltaic installations * List of solar car teams * List of solar powered products * List of solar thermal power stations * List of U.S. states by electricity production from renewable sources * Lists of wind farms by country * List of wind farms in Australia * List of wind farms in Canada * List of wind farms in Iran * List of wind farms in Romania * List of wind farms in Sweden * List of wind farms in the United States * List of wind turbine manufacturers ==See also== *Outline of solar energy *Outline of wind energy ==References== Based on REN21's 2014 report, renewables contributed 19 percent to our global energy consumption and 22 percent to our electricity generation in 2012 and 2013, respectively. ","Power density is a measure of the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Both renewable and non-renewable energy sources have similar power density, which means that the same amount of power can be obtained from power plants occupying similar areas.","Power density is a measure of the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power have high power density, which means large power can be drawn from power plants occupying relatively small areas. 
Renewable energy sources have power density at least three orders of magnitude smaller and, for the same energy output, they need to occupy accordingly larger areas.","Power density is a measure of the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Renewable energy sources have higher power density than non-renewable energy sources, which means that they can produce more power from power plants occupying smaller areas.","Power density is a measure of the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power have low power density, which means that they need to occupy larger areas to produce the same amount of power as renewable energy sources.","Power density is a measure of the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Both renewable and non-renewable energy sources have low power density, which means that they need to occupy larger areas to produce the same amount of power.",B,kaggle200,"In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m.
Surface power density is an important factor in comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
The following table shows median surface power density of renewable and non-renewable energy sources.
Measured in codice_1 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density which means large power can be drawn from power plants occupying relatively small area. Renewable energy sources have power density at least three orders of magnitude smaller and for the same energy output they need to occupy accordingly larger area, which has been already highlighted as a limiting factor of renewable energy in German Energiewende.","Power density is the amount of power (time rate of energy transfer) per unit volume.In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m3.
In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size.
Surface power density is an important factor in comparison of industrial energy sources. The concept was popularised by geographer Vaclav Smil. The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning., Fossil fuels and nuclear power are characterized by high power density which means large power can be drawn from power plants occupying relatively small area. Renewable energy sources have power density at least three orders of magnitude smaller and for the same energy output they need to occupy accordingly larger area, which has been already highlighted as a limiting factor of renewable energy in German Energiewende.The following table shows median surface power density of renewable and non-renewable energy sources.","The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
The following table shows median surface power density of renewable and non-renewable energy sources.
Measured in codice_1 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density which means large power can be drawn from power plants occupying relatively small area. Power density is the amount of power (time rate of energy transfer) per unit volume. In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m3.
In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size.
Surface power density is an important factor in comparison of industrial energy sources. In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m.
Surface power density is an important factor in comparison of industrial energy sources. Renewable energy sources have power density at least three orders of magnitude smaller and for the same energy output they need to occupy accordingly larger area, which has been already highlighted as a limiting factor of renewable energy i","The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
The following table shows median surface power density of renewable and non-renewable energy sources.
Measured in codice_1 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. The term is usually shortened to ""power density"" in the relevant literature, which can lead to confusion with homonymous or related terms.
Measured in W/m2 it describes the amount of power obtained per unit of Earth surface area used by a specific energy system, including all supporting infrastructure, manufacturing, mining of fuel (if applicable) and decommissioning. Fossil fuels and nuclear power are characterized by high power density which means large power can be drawn from power plants occupying relatively small area. Power density is the amount of power (time rate of energy transfer) per unit volume. In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m3.
In reciprocating internal combustion engines, power density (power per swept volume or brake horsepower per cubic centimeter) is an important metric, based on the internal capacity of the engine, not its external size.
Surface power density is an important factor in comparison of industrial energy sources. In energy transformers including batteries, fuel cells, motors, power supply units etc., power density refers to a volume, where it is often called volume power density, expressed as W/m.
Surface power density is an important factor in comparison of industrial energy sources. Renewable energy sources have power density at least three orders of magnitude smaller and for the same energy output they need to occupy accordingly larger area, which has been already highlighted as a limiting factor of renewable energy i[SEP]What is power density in the context of energy systems, and how does it differ between renewable and non-renewable energy sources?","['B', 'C', 'D']",1.0
What is Modified Newtonian Dynamics (MOND)?,"The MOND type behavior is suppressed in this regime due to the contribution of the second gauge field. ==See also== * Dark energy * Dark fluid * Dark matter * General theory of relativity * Law of universal gravitation * Modified Newtonian dynamics * Nonsymmetric gravitational theory * Pioneer anomaly * Scalar – scalar field * Scalar–tensor–vector gravity * Tensor * Vector ==References== Category:Theories of gravity Category:Theoretical physics Category:Astrophysics To account for the anomalous rotation curves of spiral galaxies, Milgrom proposed a modification of this force law in the form : F=\mu \left (\frac{a}{a_0} \right )ma, where \mu(x) is an arbitrary function subject to the following conditions: :\mu(x)= \begin{cases} 1 & |x|\gg 1 \\\ x & |x|\ll 1 \end{cases} In this form, MOND is not a complete theory: for instance, it violates the law of momentum conservation. Mond may refer to: ==Science and industry== * MOND (Modified Newtonian dynamics), a proposed adjustment to the classical inverse-square law of gravity * Mond gas, a cheap form of coal gas * Mond Nickel Company, a defunct mining company * Brunner Mond, a chemicals company * Der Mond, a 1837 description of the Moon by Johann Heinrich von Mädler and Wilhelm Beer ==Other== * Mond (playing card), a trump card in Tarock games * Mond (surname) * Mond River, a river in Iran * Der Mond, an opera in one act ==See also== * Mond Mond Mond , a German television series * Tensor–vector–scalar gravity (TeVeS), developed by Jacob Bekenstein in 2004, is a relativistic generalization of Mordehai Milgrom's Modified Newtonian dynamics (MOND) paradigm. These components are combined into a relativistic Lagrangian density, which forms the basis of TeVeS theory. ==Details== MOND is a phenomenological modification of the Newtonian acceleration law. Gauge vector–tensor gravity (GVT) is a relativistic generalization of Mordehai Milgrom's modified Newtonian dynamics (MOND) paradigm where gauge fields cause the MOND behavior. The former covariant realizations of MOND such as the Bekenestein's tensor–vector–scalar gravity and the Moffat's scalar–tensor–vector gravity attribute MONDian behavior to some scalar fields. In his paper, Bekenstein also investigated the consequences of TeVeS in relation to gravitational lensing and cosmology. ==Problems and criticisms== In addition to its ability to account for the flat rotation curves of galaxies (which is what MOND was originally designed to address), TeVeS is claimed to be consistent with a range of other phenomena, such as gravitational lensing and cosmological observations. TeVeS solves problems associated with earlier attempts to generalize MOND, such as superluminal propagation. This led Bekenstein to a first, nonrelativistic generalization of MOND. In the case of a spherically symmetric, static gravitational field, this Lagrangian reproduces the MOND acceleration law after the substitutions a=- abla\Phi and \mu(\sqrt{y})=df(y)/dy are made. The main features of GVT can be summarized as follows: * As it is derived from the action principle, GVT respects conservation laws; * In the weak-field approximation of the spherically symmetric, static solution, GVT reproduces the MOND acceleration formula; * It can accommodate gravitational lensing. 
The main features of TeVeS can be summarized as follows: * As it is derived from the action principle, TeVeS respects conservation laws; * In the weak-field approximation of the spherically symmetric, static solution, TeVeS reproduces the MOND acceleration formula; * TeVeS avoids the problems of earlier attempts to generalize MOND, such as superluminal propagation; * As it is a relativistic theory it can accommodate gravitational lensing. Newton–Cartan theory (or geometrized Newtonian gravitation) is a geometrical re-formulation, as well as a generalization, of Newtonian gravity first introduced by Élie Cartan and Kurt Friedrichs and later developed by Dautcourt, Dixon, Dombrowski and Horneffer, Ehlers, Havas, Künzle, Lottermoser, Trautman, and others. In this re-formulation, the structural similarities between Newton's theory and Albert Einstein's general theory of relativity are readily seen, and it has been used by Cartan and Friedrichs to give a rigorous formulation of the way in which Newtonian gravity can be seen as a specific limit of general relativity, and by Jürgen Ehlers to extend this correspondence to specific solutions of general relativity. ==Classical spacetimes== In Newton–Cartan theory, one starts with a smooth four- dimensional manifold M and defines two (degenerate) metrics. For the static mass distribution, the theory then converts to the AQUAL model of gravity with the critical acceleration of :a_0 = \frac{4\sqrt{2}\kappa c^2}{\ell} So the GVT theory is capable of reproducing the flat rotational velocity curves of galaxies. The matter current is :J^\mu = \rho u^\mu where \rho is the density and u^\mu represents the four velocity. ==Regimes of the GVT theory== GVT accommodates the Newtonian and MOND regime of gravity; but it admits the post-MONDian regime. ===Strong and Newtonian regimes=== The strong and Newtonian regime of the theory is defined to be where holds: :\begin{align} L \left (\frac{\ell^2}{4} B_{\mu u} B^{\mu u} \right ) &= \frac{\ell^2}{4} B_{\mu u} B^{\mu u}\\\ L \left (\frac{\widetilde{\ell}^2}{4} \widetilde{B}_{\mu u} \widetilde{B}^{\mu u} \right ) &= \frac{\widetilde{\ell}^2}{4} \widetilde{B}_{\mu u} \widetilde{B}^{\mu u} \end{align} The consistency between the gravitoelectromagnetism approximation to the GVT theory and that predicted and measured by the Einstein–Hilbert gravity demands that :\kappa + \widetilde{\kappa} =0 which results in :B_\mu+\widetilde{B}_\mu = 0. In physics, Newtonian dynamics (also known as Newtonian mechanics) is the study of the dynamics of a particle or a small body according to Newton's laws of motion. ==Mathematical generalizations== Typically, the Newtonian dynamics occurs in a three-dimensional Euclidean space, which is flat. Often the term Newtonian dynamics is narrowed to Newton's second law \displaystyle m\,\mathbf a=\mathbf F. ==Newton's second law in a multidimensional space== Consider \displaystyle N particles with masses \displaystyle m_1,\,\ldots,\,m_N in the regular three-dimensional Euclidean space. A study in August 2006 reported an observation of a pair of colliding galaxy clusters, the Bullet Cluster, whose behavior, it was reported, was not compatible with any current modified gravity theory. ",MOND is a theory that explains the behavior of light in the presence of strong gravitational fields. 
It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.,MOND is a hypothesis that proposes a modification of Einstein's theory of general relativity to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.,MOND is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.,MOND is a hypothesis that proposes a modification of Coulomb's law to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.,MOND is a theory that explains the behavior of subatomic particles in the presence of strong magnetic fields. It is an alternative to the hypothesis of dark energy in terms of explaining why subatomic particles do not appear to obey the currently understood laws of physics.,C,kaggle200,"However, both Milgrom's bi-metric formulation of MOND and nonlocal MOND are compatible with this measurement.
A significant piece of evidence in favor of standard dark matter is the observed anisotropies in the cosmic microwave background. While ΛCDM is able to explain the observed angular power spectrum, MOND has a much harder time, though recently it has been shown that MOND can fit the observations too. MOND also encounters difficulties explaining structure formation, with density perturbations in MOND perhaps growing so rapidly that too much structure is formed by the present epoch. However, forming galaxies more rapidly than in ΛCDM can be a good thing to some extent.
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. Since Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matter. However, MOND and its generalizations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the hypothesis.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.","MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. Since Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matter.Though MOND explains the anomalously great rotational velocities of galaxies at their perimeters, it does not fully explain the velocity dispersions of individual galaxies within galaxy clusters. MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to a factor of about 2. However, the residual discrepancy cannot be accounted for by MOND, requiring that other explanations close the gap such as the presence of as-yet undetected missing baryonic matter.The accurate measurement of the speed of gravitational waves compared to the speed of light in 2017 ruled out a certain class of modified gravity theories but concluded that other MOND theories that dispense with the need for dark matter remained viable. Two years later, theories put forth by Constantinos Skordis and Tom Zlosnik were consistent with gravitational waves that always travel at the speed of light. Later still in 2021, Skordis and Zlosnik developed a subclass of their theory called ""RMOND"", for ""relativistic MOND"", which had ""been shown to reproduce in great detail the main observations in cosmology, including the cosmic-microwave-background power spectrum, and the matter structure power spectrum.""
Both MOND and dark matter halos stabilize disk galaxies, helping them retain their rotation-supported structure and preventing their transformation into elliptical galaxies. In MOND, this added stability is only available for regions of galaxies within the deep-MOND regime (i.e., with a < a0), suggesting that spirals with a > a0 in their central regions should be prone to instabilities and hence less likely to survive to the present day. This may explain the ""Freeman limit"" to the observed central surface mass density of spiral galaxies, which is roughly a0/G. This scale must be put in by hand in dark matter-based galaxy formation models.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. It is an alternative to the hypothesis of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics.","MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. However, MOND and its generalizations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the hypothesis.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. This scale must be put in by hand in dark matter-based galaxy formation models.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. However, forming galaxies more rapidly than in ΛCDM can be a good thing to some extent.
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to a factor of about 2. Later still in 2021, Skordis and Zlosnik developed a subclass of their theory called ""RMOND"", for ""relativistic MOND"", which had ""been shown to reproduce in great detail the main observations in cosmology, including the cosmic-microwave-background power spectrum, and the matter structure power spectrum.""
Both MOND and dark matter halos stabilize disk galaxies, helping them retain their rotation-supported structure and preventing their transformation into elliptical galaxiesSince Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matterSince Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark ","MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halosHowever, MOND and its generalizations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the hypothesis.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. This scale must be put in by hand in dark matter-based galaxy formation models.
Modified Newtonian dynamics (MOND) is a hypothesis that proposes a modification of Newton's law of universal gravitation to account for observed properties of galaxies. However, forming galaxies more rapidly than in ΛCDM can be a good thing to some extent.
MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos. MOND reduces the discrepancy between the velocity dispersions and clusters' observed missing baryonic mass from a factor of around 10 to a factor of about 2. Later still in 2021, Skordis and Zlosnik developed a subclass of their theory called ""RMOND"", for ""relativistic MOND"", which had ""been shown to reproduce in great detail the main observations in cosmology, including the cosmic-microwave-background power spectrum, and the matter structure power spectrum.""
Both MOND and dark matter halos stabilize disk galaxies, helping them retain their rotation-supported structure and preventing their transformation into elliptical galaxiesSince Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark matterSince Milgrom's original proposal, proponents of MOND have claimed to successfully predict a variety of galactic phenomena that they state are difficult to understand as consequences of dark [SEP]What is Modified Newtonian Dynamics (MOND)?","['C', 'D', 'B']",1.0
What is linear frame dragging?,"Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Frame-dragging is an effect on spacetime, predicted by Albert Einstein's general theory of relativity, that is due to non-static stationary distributions of mass–energy. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. Although it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).Einstein, A The Meaning of Relativity (contains transcripts of his 1921 Princeton lectures). In 2015, new general-relativistic extensions of Newtonian rotation laws were formulated to describe geometric dragging of frames which incorporates a newly discovered antidragging effect. ==Effects== Rotational frame-dragging (the Lense–Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction. It is now the best known frame-dragging effect, partly thanks to the Gravity Probe B experiment. The first frame-dragging effect was derived in 1918, in the framework of general relativity, by the Austrian physicists Josef Lense and Hans Thirring, and is also known as the Lense–Thirring effect. A research group in Italy, USA, and UK also claimed success in verification of frame dragging with the Grace gravity model, published in a peer reviewed journal. The method of the moving frame, in this simple example, seeks to produce a ""preferred"" moving frame out of the kinematic properties of the observer. A moving frame, in these circumstances, is just that: a frame which varies from point to point. One may compare linear motion to general motion. In mathematics, a moving frame is a flexible generalization of the notion of an ordered basis of a vector space often used to study the extrinsic differential geometry of smooth manifolds embedded in a homogeneous space. ==Introduction== In lay terms, a frame of reference is a system of measuring rods used by an observer to measure the surrounding space by providing coordinates. A linear-motion bearing or linear slide is a bearing designed to provide free motion in one direction. All linear slides provide linear motion based on bearings, whether they are ball bearings, dovetail bearings, linear roller bearings, magnetic or fluid bearings. In fact, in the method of moving frames, one more often works with coframes rather than frames. In relativity and in Riemannian geometry, the most useful kind of moving frames are the orthogonal and orthonormal frames, that is, frames consisting of orthogonal (unit) vectors at each point. A moving frame is then a frame of reference which moves with the observer along a trajectory (a curve). By comparing the rate of orbital precession of two stars on different orbits, it is possible in principle to test the no-hair theorems of general relativity, in addition to measuring the spin of the black hole. ==Astronomical evidence== Relativistic jets may provide evidence for the reality of frame-dragging. 
In the case of linear frames, for instance, any two frames are related by an element of the general linear group. ",Linear frame dragging is the effect of the general principle of relativity applied to the mass of a body when other masses are placed nearby. It is a tiny effect that is difficult to confirm experimentally and often omitted from articles on frame-dragging.,"Linear frame dragging is the effect of the general principle of relativity applied to rotational momentum, which is a large effect that is easily confirmed experimentally and often discussed in articles on frame-dragging.","Linear frame dragging is the effect of the general principle of relativity applied to rotational momentum, which is similarly inevitable to the linear effect. It is a tiny effect that is difficult to confirm experimentally and often omitted from articles on frame-dragging.","Linear frame dragging is the effect of the general principle of relativity applied to linear momentum, which is similarly inevitable to the rotational effect. It is a tiny effect that is difficult to confirm experimentally and often omitted from articles on frame-dragging.","Linear frame dragging is the effect of the general principle of relativity applied to linear momentum, which is a large effect that is easily confirmed experimentally and often discussed in articles on frame-dragging.",D,kaggle200,"The much smaller frame-dragging effect is an example of gravitomagnetism. It is an analog of magnetism in classical electrodynamics, but caused by rotating masses rather than rotating electric charges. Previously, only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in and , claimed to have found the frame-dragging effect with an accuracy of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure the frame dragging effect to a precision of 1%. However, Lorenzo Iorio claimed that the level of total uncertainty of the tests conducted with the two LAGEOS satellites has likely been greatly underestimated. A recent analysis of Mars Global Surveyor data has claimed to have confirmed the frame dragging effect to a precision of 0.5%, although the accuracy of this claim is disputed. Also the Lense–Thirring effect of the Sun has been recently investigated in view of a possible detection with the inner planets in the near future.
The Gravity Probe B satellite, launched in 2004 and operated until 2005, detected frame-dragging and the geodetic effect. The experiment used four quartz spheres the size of ping pong balls coated with a superconductor. Data analysis continued through 2011 due to high noise levels and difficulties in modelling the noise accurately so that a useful signal could be found. Principal investigators at Stanford University reported on May 4, 2011, that they had accurately measured the frame dragging effect relative to the distant star IM Pegasi, and the calculations proved to be in line with the prediction of Einstein's theory. The results, published in ""Physical Review Letters"" measured the geodetic effect with an error of about 0.2 percent. The results reported the frame dragging effect (caused by Earth's rotation) added up to 37 milliarcseconds with an error of about 19 percent. Investigator Francis Everitt explained that a milliarcsecond ""is the width of a human hair seen at the distance of 10 miles"".
Static mass increase is a third effect noted by Einstein in the same paper. The effect is an increase in inertia of a body when other masses are placed nearby. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. It is also a tiny effect that is difficult to confirm experimentally.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Although it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).","On May 4, 2011, the Stanford-based analysis group and NASA announced the final report, and in it the data from GP-B demonstrated the frame-dragging effect with an error of about 19 percent, and Einstein's predicted value was at the center of the confidence interval.NASA published claims of success in verification of frame dragging for the GRACE twin satellites and Gravity Probe B, both of which claims are still in public view. A research group in Italy, USA, and UK also claimed success in verification of frame dragging with the Grace gravity model, published in a peer reviewed journal. All the claims include recommendations for further research at greater accuracy and other gravity models.
Frame-dragging tests Tests of the Lense–Thirring precession, consisting of small secular precessions of the orbit of a test particle in motion around a central rotating mass, for example, a planet or a star, have been performed with the LAGEOS satellites, but many aspects of them remain controversial. The same effect may have been detected in the data of the Mars Global Surveyor (MGS) spacecraft, a former probe in orbit around Mars; also such a test raised a debate. First attempts to detect the Sun's Lense–Thirring effect on the perihelia of the inner planets have been recently reported as well. Frame dragging would cause the orbital plane of stars orbiting near a supermassive black hole to precess about the black hole spin axis. This effect should be detectable within the next few years via astrometric monitoring of stars at the center of the Milky Way galaxy. By comparing the rate of orbital precession of two stars on different orbits, it is possible in principle to test the no-hair theorems of general relativity.The Gravity Probe B satellite, launched in 2004 and operated until 2005, detected frame-dragging and the geodetic effect. The experiment used four quartz spheres the size of ping pong balls coated with a superconductor. Data analysis continued through 2011 due to high noise levels and difficulties in modelling the noise accurately so that a useful signal could be found. Principal investigators at Stanford University reported on May 4, 2011, that they had accurately measured the frame dragging effect relative to the distant star IM Pegasi, and the calculations proved to be in line with the prediction of Einstein's theory. The results, published in Physical Review Letters measured the geodetic effect with an error of about 0.2 percent. The results reported the frame dragging effect (caused by Earth's rotation) added up to 37 milliarcseconds with an error of about 19 percent. Investigator Francis Everitt explained that a milliarcsecond ""is the width of a human hair seen at the distance of 10 miles"".In January 2012, LARES satellite was launched on a Vega rocket to measure Lense–Thirring effect with an accuracy of about 1%, according to its proponents. This evaluation of the actual accuracy obtainable is a subject of debate.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. Although it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).Static mass increase is a third effect noted by Einstein in the same paper. The effect is an increase in inertia of a body when other masses are placed nearby. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. It is also a tiny effect that is difficult to confirm experimentally.","It is also a tiny effect that is difficult to confirm experimentally.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. The much smaller frame-dragging effect is an example of gravitomagnetism. This evaluation of the actual accuracy obtainable is a subject of debate.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentumAlthough it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).Static mass increase is a third effect noted by Einstein in the same paperAlthough it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).A research group in Italy, USA, and UK also claimed success in verification of frame dragging with the Grace gravity model, published in a peer reviewed journalPreviously, only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in and , claimed to have found the frame-dragging effect with an accuracy of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure the frame dragging effect to a precision of 1%A recent analysis of Mars Global Surveyor data has claimed to have confirmed the frame dragging effect to a precision of 0.5%, although the accuracy of this claim is disputedFrame dragging would cause the orbital plane of stars orbiting near a supermassive black hole to precess about the black hole spin axisThe results reported t","It is also a tiny effect that is difficult to confirm experimentally.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentum. While not strictly a frame dragging effect (the term frame dragging is not used by Einstein), it is demonstrated by Einstein that it derives from the same equation of general relativity. The much smaller frame-dragging effect is an example of gravitomagnetism. This evaluation of the actual accuracy obtainable is a subject of debate.
Linear frame dragging is the similarly inevitable result of the general principle of relativity, applied to linear momentumAlthough it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).Static mass increase is a third effect noted by Einstein in the same paperAlthough it arguably has equal theoretical legitimacy to the ""rotational"" effect, the difficulty of obtaining an experimental verification of the effect means that it receives much less discussion and is often omitted from articles on frame-dragging (but see Einstein, 1921).A research group in Italy, USA, and UK also claimed success in verification of frame dragging with the Grace gravity model, published in a peer reviewed journalPreviously, only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in and , claimed to have found the frame-dragging effect with an accuracy of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure the frame dragging effect to a precision of 1%A recent analysis of Mars Global Surveyor data has claimed to have confirmed the frame dragging effect to a precision of 0.5%, although the accuracy of this claim is disputedFrame dragging would cause the orbital plane of stars orbiting near a supermassive black hole to precess about the black hole spin axisThe results reported t[SEP]What is linear frame dragging?","['D', 'C', 'E']",1.0
What is explicit symmetry breaking in theoretical physics?,"In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it.Castellani, E. (2003) ""On the meaning of Symmetry Breaking"" in Brading, K. and Castellani, E. (eds) Symmetries in Physics: New Reflections, Cambridge: Cambridge University Press Explicit symmetry breaking is also associated with electromagnetic radiation. Symmetry breaking can be distinguished into two types, explicit and spontaneous. Explicit symmetry breaking differs from spontaneous symmetry breaking. The explicit symmetry breaking occurs at a smaller energy scale. Usually this term is used in situations where these symmetry- breaking terms are small, so that the symmetry is approximately respected by the theory. When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. Roughly speaking there are three types of symmetry that can be broken: discrete, continuous and gauge, ordered in increasing technicality. In physics, symmetry breaking is a phenomenon where a disordered but symmetric state collapses into an ordered, but less symmetric state. Further, in this context the usage of 'symmetry breaking' while standard, is a misnomer, as gauge 'symmetry' is not really a symmetry but a redundancy in the description of the system. An example of its use is in finding the fine structure of atomic spectra. == Examples == Symmetry breaking can cover any of the following scenarios: :* The breaking of an exact symmetry of the underlying laws of physics by the apparently random formation of some structure; :* A situation in physics in which a minimal energy state has less symmetry than the system itself; :* Situations where the actual state of the system does not reflect the underlying symmetries of the dynamics because the manifestly symmetric state is unstable (stability is gained at the cost of local asymmetry); :* Situations where the equations of a theory may have certain symmetries, though their solutions may not (the symmetries are ""hidden""). A special case of this type of symmetry breaking is dynamical symmetry breaking. The chiral symmetries discussed, however, are only approximate symmetries in nature, given their small explicit breaking. These two types of symmetry breaking typically occur separately, and at different energy scales, and are not thought to be predicated on each other. Spontaneous symmetry breaking is a spontaneous process of symmetry breaking, by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. In particle physics, chiral symmetry breaking is the spontaneous symmetry breaking of a chiral symmetry - usually by a gauge theory such as quantum chromodynamics, the quantum field theory of the strong interaction. For example in the Ising model, as the temperature of the system falls below the critical temperature the \mathbb{Z}_2 symmetry of the vacuum is broken, giving a phase transition of the system. 
==Explicit symmetry breaking== In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. Hence, the symmetry is said to be spontaneously broken in that theory. The term ""spontaneous symmetry breaking"" is a misnomer here as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. ","Explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion that do not respect the symmetry, always in situations where these symmetry-breaking terms are large, so that the symmetry is not respected by the theory.","Explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion that do not respect the symmetry, usually in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory.","Explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion that respect the symmetry, always in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory.","Explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion that respect the symmetry, always in situations where these symmetry-breaking terms are large, so that the symmetry is not respected by the theory.","Explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion that respect the symmetry, usually in situations where these symmetry-breaking terms are large, so that the symmetry is not respected by the theory.",B,kaggle200,"Symmetry breaking can be distinguished into two types, explicit symmetry breaking and spontaneous symmetry breaking, characterized by whether the equations of motion fail to be invariant or the ground state fails to be invariant.
where formula_5 is the term which explicitly breaks the symmetry. The resulting equations of motion will also not have formula_3-symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamiltonian of the atoms involved.","In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In Hamiltonian mechanics or Lagrangian Mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it. Explicit symmetry breaking is also associated with electromagnetic radiation. A system of accelerated charges results in electromagnetic radiation when the geometric symmetry of the electric field in free space is explicitly broken by the associated electrodynamic structure under time varying excitation of the given system. This is quite evident in an antenna where the electric lines of field curl around or have rotational geometry around the radiating terminals in contrast to linear geometric orientation within a pair of transmission lines which does not radiate even under time varying excitation.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamiltonian of the atoms involved.","In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. Symmetry breaking can be distinguished into two types, explicit symmetry breaking and spontaneous symmetry breaking, characterized by whether the equations of motion fail to be invariant or the ground state fails to be invariant.
where formula_5 is the term which explicitly breaks the symmetry. In Hamiltonian mechanics or Lagrangian Mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. This is quite evident in an antenna where the electric lines of field curl around or have rotational geometry around the radiating terminals in contrast to linear geometric orientation within a pair of transmission lines which does not radiate even under time varying excitation.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it. Explicit symmetry breaking is also associated with electromagnetic radiation. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. The resulting equations of motion will also not have formula_3-symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamilto","In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. Symmetry breaking can be distinguished into two types, explicit symmetry breaking and spontaneous symmetry breaking, characterized by whether the equations of motion fail to be invariant or the ground state fails to be invariant.
where formula_5 is the term which explicitly breaks the symmetry. In Hamiltonian mechanics or Lagrangian Mechanics, this happens when there is at least one term in the Hamiltonian (or Lagrangian) that explicitly breaks the given symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. This is quite evident in an antenna where the electric lines of field curl around or have rotational geometry around the radiating terminals in contrast to linear geometric orientation within a pair of transmission lines which does not radiate even under time varying excitation.
In theoretical physics, explicit symmetry breaking is the breaking of a symmetry of a theory by terms in its defining equations of motion (most typically, to the Lagrangian or the Hamiltonian) that do not respect the symmetry. In explicit symmetry breaking (ESB), the equations of motion describing a system are variant under the broken symmetry. In the latter, the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks it. Explicit symmetry breaking is also associated with electromagnetic radiation. Usually this term is used in situations where these symmetry-breaking terms are small, so that the symmetry is approximately respected by the theory. The resulting equations of motion will also not have formula_3-symmetry.
Explicit symmetry breaking differs from spontaneous symmetry breaking. An example is the spectral line splitting in the Zeeman effect, due to a magnetic interaction perturbation in the Hamilto[SEP]What is explicit symmetry breaking in theoretical physics?","['B', 'E', 'D']",1.0
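As a minimal worked illustration of the kind of symmetry-breaking term the rows above refer to as formula_5 (the cubic toy potential below is an assumed example for illustration, not text drawn from the source material): for a real scalar field with
\mathcal{L} = \tfrac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2 - \epsilon\, \phi^3 ,
the Lagrangian is invariant under \phi \to -\phi only when \epsilon = 0; the \epsilon\,\phi^3 term breaks that discrete symmetry explicitly, directly in the defining equations of motion, and for small \epsilon the symmetry remains approximately respected, matching the usage described above.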
What is the role of the Higgs boson in the Standard Model?,"Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons. === Structure of the Higgs field === In the standard model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property ""mass"" for gauge bosons. The Alternative models to the Standard Higgs Model are models which are considered by many particle physicists to solve some of the Higgs boson's existing problems. In the basic Standard Model there is one field and one related Higgs boson; in some extensions to the Standard Model there are multiple fields and multiple Higgs bosons. Philosophically, the Higgs boson is either a composite state, built of more fundamental constituents, or it is connected to other states in nature by a symmetry such as supersymmetry (or some blend of these concepts). In the simplest models one finds a correlation between the Higgs mass and the mass M of the top partners,M. Redi and A. Tesi, Implications of a Light Higgs in Composite Models, JHEP 1210, 166 (2012) https://arxiv.org/abs/1205.0232. :m_h^2\sim \frac {3}{2\pi^2} \frac {M^2}{f^2} v^2 In models with f~TeV as suggested by naturalness this indicates fermionic resonances with mass around 1 TeV. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. The little Higgs models predict a naturally-light Higgs particle. ==Loop cancellation== The main idea behind the little Higgs models is that the one- loop contribution to the tachyonic Higgs boson mass coming from the top quark cancels.Other one-loop contributions are small enough that they don't really matter: The Yukawa coupling of the top quark is enormous because of its huge mass, and all the other fermions' Yukawa couplings and gauge couplings are negligible by comparison. This restricts the Higgs boson mass for about one order of magnitude, which is good enough to evade many of the precision electroweak constraints. ==History== Little Higgs theories were an outgrowth of dimensional deconstruction: In these theories, the gauge group has the form of a direct product of several copies of the same factor, for example SU(2) × SU(2). So in the Abelian Higgs model, the gauge field acquires a mass. In the Standard Model, the phrase ""Higgs mechanism"" refers specifically to the generation of masses for the W±, and Z weak gauge bosons through electroweak symmetry breaking. In particle physics, little Higgs models are based on the idea that the Higgs boson is a pseudo-Goldstone boson arising from some global symmetry breaking at a TeV energy scale. A more recent version of the Top Seesaw model of Dobrescu and Cheng has an acceptable light composite Higgs boson. The goal of little Higgs models is to use the spontaneous breaking of such approximate global symmetries to stabilize the mass of the Higgs boson(s) responsible for electroweak symmetry breaking. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W+, W−, and Z0 bosons actually have relatively large masses of around 80 GeV/c2. 
The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The mass of the Higgs boson is proportional to H, so the Higgs boson becomes infinitely massive and decouples, so is not present in the discussion. For these fields, the mass terms should always be replaced by a gauge-invariant ""Higgs"" mechanism. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (, and ), and are only observable as components of these weak bosons, which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson. In particle physics, composite Higgs models (CHM) are speculative extensions of the Standard Model (SM) where the Higgs boson is a bound state of new strong interactions. ",The Higgs boson is responsible for giving mass to the photon and gluon in the Standard Model.,The Higgs boson has no role in the Standard Model.,The Higgs boson is responsible for giving mass to all the elementary particles in the Standard Model.,"The Higgs boson is responsible for giving mass to all the elementary particles, except the photon and gluon, in the Standard Model.",The Higgs boson is responsible for giving mass to all the composite particles in the Standard Model.,D,kaggle200,"The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. Elementary-particle masses, and the differences between electromagnetism (mediated by the photon) and the weak force (mediated by the W and Z bosons), are critical to many aspects of the structure of microscopic (and hence macroscopic) matter. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. As the Higgs boson is massive, it must interact with itself.
On 22 December 2011, the DØ collaboration also reported limitations on the Higgs boson within the Minimal Supersymmetric Standard Model, an extension to the Standard Model. Proton-antiproton (p) collisions with a centre-of-mass energy of 1.96 TeV had allowed them to set an upper limit for Higgs boson production within MSSM ranging from 90 to 300 GeV, and excluding > 20–30 for masses of the Higgs boson below 180 GeV ( is the ratio of the two Higgs doublet vacuum expectation values).
In 2012, observations were considered consistent with the observed particle being the Standard Model Higgs boson. The particle decays into at least some of the predicted channels. Moreover, the production rates and branching ratios for the observed channels match the predictions by the Standard Model within the experimental uncertainties. However, the experimental uncertainties still left room for alternative explanations. It was therefore considered too early to conclude that the found particle was indeed the Standard Model Higgs boson.
The search for the Higgs boson was a 40-year effort by physicists to prove the existence or non-existence of the Higgs boson, first theorised in the 1960s. The Higgs boson was the last unobserved fundamental particle in the Standard Model of particle physics, and its discovery was described as being the ""ultimate verification"" of the Standard Model. In March 2013, the Higgs boson was officially confirmed to exist.","Particle physics Validation of the Standard Model The Higgs boson validates the Standard Model through the mechanism of mass generation. As more precise measurements of its properties are made, more advanced extensions may be suggested or excluded. As experimental means to measure the field's behaviours and interactions are developed, this fundamental field may be better understood. If the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded.
22 December 2011 – the DØ collaboration also sets limits on Higgs boson masses within the Minimal Supersymmetric Standard Model (an extension of the Standard Model), with an upper limit for production ranging from 90 to 300 GeV, and excluding tanβ>20–30 for Higgs boson masses below 180 GeV at 95% CL.
7 February 2012 – updating the December results, the ATLAS and CMS experiments constrain the Standard Model Higgs boson, if it exists, to the range 116–131 GeV and 115–127 GeV, respectively, with the same statistical significance as before.
Properties of the Higgs boson: Since the Higgs field is scalar, the Higgs boson has no spin. The Higgs boson is also its own antiparticle, is CP-even, and has zero electric and colour charge. The Standard Model does not predict the mass of the Higgs boson. If that mass is between 115 and 180 GeV/c2 (consistent with empirical observations of 125 GeV/c2), then the Standard Model can be valid at energy scales all the way up to the Planck scale (10^19 GeV/c2). It should be the only particle in the Standard Model that remains massive even at high energies. Many theorists expect new physics beyond the Standard Model to emerge at the TeV-scale, based on unsatisfactory properties of the Standard Model.","- The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massive. The Higgs boson was the last unobserved fundamental particle in the Standard Model of particle physics, and its discovery was described as being the ""ultimate verification"" of the Standard Model. If the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded.
22 December 2011 – the DØ collaboration also sets limits on Higgs boson masses within the Minimal Supersymmetric Standard Model (an extension of the Standard Model), with an upper limit for production ranging from 90 to 300 GeV, and excluding tanβ>20–30 for Higgs boson masses below 180 GeV at 95% CL.
7 February 2012 – updating the December results, the ATLAS and CMS experiments constrain the Standard Model Higgs boson, if it exists, to the range 116–131 GeV and 115–127 GeV, respectively, with the same statistical significance as before.
Properties of the Higgs boson: Since the Higgs field is scalar, the Higgs boson has no spin. Particle physics – Validation of the Standard Model: The Higgs boson validates the Standard Model through the mechanism of mass generation. As the Higgs boson is massive, it must interact with itself.
On 22 December 2011, the DØ collaboration also reported limitations on the Higgs boson within the Minimal Supersymmetric Standard Model, an extension to the Standard ModelThe Higgs boson is also its own antiparticle, is CP-even, and has zero electric and colour charge.The Standard Model does not predict the mass of the Higgs bosonIn particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavyIn electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarksProton-antiproton (p) collisions with a centre-of-mass energy of 1.96 TeV had allowed them to set an upper limit for Higgs boson production within MSSM ranging from 90 to 300 GeV, and excluding > 20–30 for masses of the Higgs boson below 180 GeV ( is the ratio of ","- The Higgs boson plays a unique role in the Standard Model, by explaining why the other elementary particles, except the photon and gluon, are massiveThe Higgs boson was the last unobserved fundamental particle in the Standard Model of particle physics, and its discovery was described as being the ""ultimate verification"" of the Standard ModelIf the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded.
22 December 2011 – the DØ collaboration also sets limits on Higgs boson masses within the Minimal Supersymmetric Standard Model (an extension of the Standard Model), with an upper limit for production ranging from 90 to 300 GeV, and excluding tanβ>20–30 for Higgs boson masses below 180 GeV at 95% CL.
7 February 2012 – updating the December results, the ATLAS and CMS experiments constrain the Standard Model Higgs boson, if it exists, to the range 116–131 GeV and 115–127 GeV, respectively, with the same statistical significance as before.
Properties of the Higgs boson: Since the Higgs field is scalar, the Higgs boson has no spin. Particle physics – Validation of the Standard Model: The Higgs boson validates the Standard Model through the mechanism of mass generation. As the Higgs boson is massive, it must interact with itself.
On 22 December 2011, the DØ collaboration also reported limitations on the Higgs boson within the Minimal Supersymmetric Standard Model, an extension to the Standard Model. The Higgs boson is also its own antiparticle, is CP-even, and has zero electric and colour charge. The Standard Model does not predict the mass of the Higgs boson. In particular, the Higgs boson explains why the photon has no mass, while the W and Z bosons are very heavy. In electroweak theory, the Higgs boson generates the masses of the leptons (electron, muon, and tau) and quarks. Proton-antiproton (p) collisions with a centre-of-mass energy of 1.96 TeV had allowed them to set an upper limit for Higgs boson production within MSSM ranging from 90 to 300 GeV, and excluding > 20–30 for masses of the Higgs boson below 180 GeV ( is the ratio of [SEP]What is the role of the Higgs boson in the Standard Model?","['D', 'C', 'E']",1.0
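As a rough worked check of the composite-Higgs relation quoted in the context above (taking v ≈ 246 GeV and m_h ≈ 125 GeV as inputs; the benchmark f ≈ 1 TeV is an assumption chosen only for illustration):
m_h^2 \sim \frac{3}{2\pi^2}\,\frac{M^2}{f^2}\,v^2 \;\;\Rightarrow\;\; M \sim \sqrt{\frac{2\pi^2}{3}}\,\frac{m_h}{v}\,f \approx 2.57 \times \frac{125\ \text{GeV}}{246\ \text{GeV}} \times 1\ \text{TeV} \approx 1.3\ \text{TeV},
which is consistent with the statement in the same passage that naturalness points to fermionic top-partner resonances with masses around 1 TeV.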
What is Lorentz symmetry or Lorentz invariance in relativistic physics?,"In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. Lorentz covariance has two distinct, but closely related meanings: # A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. The Lorentz group is a Lie group of symmetries of the spacetime of special relativity. In a relativistic theory of physics, a Lorentz scalar is an expression, formed from items of the theory, which evaluates to a scalar, invariant under any Lorentz transformation. Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation). The third discrete symmetry entering in the CPT theorem along with and , charge conjugation symmetry , has nothing directly to do with Lorentz invariance. == Action on function spaces == If is a vector space of functions of a finite number of variables , then the action on a scalar function f \in V given by produces another function . Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. A Lorentz scalar is not always immediately seen to be an invariant scalar in the mathematical sense, but the resulting scalar value is invariant under any basis transformation applied to the vector space, on which the considered theory is based. Invariants constructed from W, instances of Casimir invariants can be used to classify irreducible representations of the Lorentz group. ==Symmetries in quantum field theory and particle physics== ===Unitary groups in quantum field theory=== Group theory is an abstract way of mathematically analyzing symmetries. Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. There is a generalization of this concept to cover Poincaré covariance and Poincaré invariance. ==Examples== In general, the (transformational) nature of a Lorentz tensor can be identified by its tensor order, which is the number of free indices it has. The Lorentz group is 6-dimensional. ===Pure rotations in spacetime=== The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group. ==Notation== The notational conventions used in this article are as follows. 
A simple Lorentz scalar in Minkowski spacetime is the spacetime distance (""length"" of their difference) of two fixed events in spacetime. In general, symmetry in physics, invariance, and conservation laws, are fundamentally important constraints for formulating physical theories and models. They are relativistically invariant and their solutions transform under the Lorentz group as Lorentz scalars () and bispinors respectively (). * The laws of physics are symmetric under a deformation of the Lorentz or more generally, the Poincaré group, and this deformed symmetry is exact and unbroken. Lorentz symmetry violation is governed by an energy-dependent parameter which tends to zero as momentum decreases. ",Lorentz symmetry or Lorentz invariance is a property of the underlying spacetime manifold that describes the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space.,"Lorentz symmetry or Lorentz invariance is a measure of the curvature of spacetime caused by the presence of massive objects, which describes the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space.","Lorentz symmetry or Lorentz invariance is a physical quantity that transforms under a given representation of the Lorentz group, built out of scalars, four-vectors, four-tensors, and spinors.","Lorentz symmetry or Lorentz invariance is a measure of the time dilation and length contraction effects predicted by special relativity, which states that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame.",Lorentz symmetry or Lorentz invariance is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame.,E,kaggle200,"The Lorentz-violating contributions to the Lagrangian are built as observer Lorentz scalars by contracting standard field operators with controlling quantities called coefficients for Lorentz violation. These coefficients, arising from the spontaneous breaking of Lorentz symmetry, lead to non-standard effects that could be observed in current experiments. Tests of Lorentz symmetry attempt to measure these coefficients. A nonzero result would indicate Lorentz violation.
The results of experimental searches of Lorentz invariance violation in the photon sector of the SME are summarized in the Data Tables for Lorentz and CPT violation.
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of ""c"".
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. It has also been described as ""the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space"".","In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance shortly after the Big Bang could have left a ""relic field"" throughout the universe which causes particles to behave differently depending on their velocity relative to the field; however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light be the ultimate constituents of matter.
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of c.
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. It has also been described as ""the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space"".Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. Lorentz covariance has two distinct, but closely related meanings: A physical quantity is said to be Lorentz covariant if it transforms under a given representation of the Lorentz group. According to the representation theory of the Lorentz group, these quantities are built out of scalars, four-vectors, four-tensors, and spinors. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation).","The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of c.
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of ""c"".
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation). Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light be the ultimate constituents of matter.
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of lightIt has also been described as ""the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space"".Lorentz covariance, a related concept, is a property of the underlying spacetime manifoldIn some models of broken Lorentz symmetry, it is postulated that the symmetry is","The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of c.
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of ""c"".
In relativistic physics, Lorentz symmetry or Lorentz invariance, named after the Dutch physicist Hendrik Lorentz, is an equivalence of observation or observational symmetry due to special relativity implying that the laws of physics stay the same for all observers that are moving with respect to one another within an inertial frame. In particular, a Lorentz covariant scalar (e.g., the space-time interval) remains the same under Lorentz transformations and is said to be a Lorentz invariant (i.e., they transform under the trivial representation). Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light be the ultimate constituents of matter.
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. It has also been described as ""the feature of nature that says experimental results are independent of the orientation or the boost velocity of the laboratory through space"". Lorentz covariance, a related concept, is a property of the underlying spacetime manifold. In some models of broken Lorentz symmetry, it is postulated that the symmetry is[SEP]What is Lorentz symmetry or Lorentz invariance in relativistic physics?","['E', 'D', 'A']",1.0
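To make the invariance claim above concrete, here is a short worked check (the symbols and the boost form are the standard textbook ones, included only as an illustrative aside): for a boost with speed v along x, with \beta = v/c and \gamma = 1/\sqrt{1-\beta^2},
c t' = \gamma\,(c t - \beta x), \qquad x' = \gamma\,(x - \beta c t),
and substituting directly gives
x'^2 - c^2 t'^2 = \gamma^2\big[(x - \beta c t)^2 - (c t - \beta x)^2\big] = (1-\beta^2)\,\gamma^2\,(x^2 - c^2 t^2) = x^2 - c^2 t^2,
so the space-time interval quoted above as the prototypical Lorentz-invariant scalar is indeed the same for every inertial observer.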
What is the significance of Baryon Acoustic Oscillations (BAOs) in the study of the universe?,"In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universe. Therefore, the BAO technique helps constrain cosmological parameters and provide further insight into the nature of dark energy. ==See also== * Baryon Oscillation Spectroscopic Survey * BINGO (telescope) * Euclid (spacecraft) ==References== ==External links== * Martin White's Baryon Acoustic Oscillations and Dark Energy Web Page * * Review of Baryon Acoustic Oscillations * SDSS BAO Press Release Category:Physical cosmology Category:Baryons The BAO signal is a standard ruler such that the length of the sound horizon can be measured as a function of cosmic time. BAO measurements help cosmologists understand more about the nature of dark energy (which causes the accelerating expansion of the universe) by constraining cosmological parameters. ==The early universe== The early universe consisted of a hot, dense plasma of electrons and baryons (which include protons and neutrons). BINGO (Baryon Acoustic Oscillations from Integrated Neutral Gas Observations) is a transit radio telescope currently under construction that will observe redshifted hydrogen line emission (between z = 0.13 and 0.45) by intensity mapping to measure dark energy through baryon acoustic oscillations (BAO) in the radio frequency band. The SDSS catalog provides a picture of the distribution of matter in a large enough portion of the universe that one can search for a BAO signal by noting whether there is a statistically significant overabundance of galaxies separated by the predicted sound horizon distance. BAO can add to the body of knowledge about this acceleration by comparing observations of the sound horizon today (using clustering of galaxies) to that of the sound horizon at the time of recombination (using the CMB). SDSS confirmed the WMAP results that the sound horizon is ~ in today's universe. ==Detection in other galaxy surveys== The 2dFGRS collaboration and the SDSS collaboration reported a detection of the BAO signal in the power spectrum at around the same time in 2005. It is not possible to observe this preferred separation of galaxies on the sound horizon scale by eye, but one can measure this artifact statistically by looking at the separations of large numbers of galaxies. ==Standard ruler== The physics of the propagation of the baryon waves in the early universe is fairly simple; as a result cosmologists can predict the size of the sound horizon at the time of recombination. 11) Baryon acoustic oscillations from Integrated Neutral Gas Observations: Radio frequency interference measurements and telescope site selection. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology. * Detailed analysis of the small fluctuations (anisotropies) in the cosmic microwave background (CMB), especially the second peak of the CMB power spectrum. 9) Baryon acoustic oscillations from Integrated Neutral Gas Observations: Broadband corrugated horn construction and testing. 10) Baryon Acoustic Oscillations from Integrated Neutral Gas Observations: an instrument to observe the 21cm hydrogen line in the redshift range 0.13 < z < 0.45 – status update. 
Cosmic microwave background radiation (CMBR) from outer space is also a form of cosmic noise. The BAO signal would show up as a bump in the correlation function at a comoving separation equal to the sound horizon. The device measures the tiny heating of the early universe by the first generation of stars and galaxies to form after the Big Bang. == Sources of cosmic noise == Cosmic noise refers to the background radio frequency radiation from galactic sources, which have constant intensity during geomagnetically quiet periods. === Sun flares === Cosmic noise can be traced from solar flares, which are sudden explosive releases of stored magnetic energy in the atmosphere of the Sun, causing sudden brightening of the photosphere. It is easier to detect the WHIM through highly ionized oxygen such as OVI and OVII absorption. == Universe composition == thumb|458x458px|The distribution of known baryons in the universe. CMB spectral distortions are tiny departures of the average cosmic microwave background (CMB) frequency spectrum from the predictions given by a perfect black body. In the future, the Uirapuru will serve as a prototype for a set of detectors called ""outriggers,"" designed to enhance BINGO's search for FRB signals. ==Papers== 1) The BINGO project - I. Baryon acoustic oscillations from integrated neutral gas observations. ","BAOs establish a preferred length scale for baryons, which can be used to detect a subtle preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130-160 Mpc.",BAOs help to determine the average temperature of the Universe by measuring the temperature of the cosmic microwave background radiation.,BAOs provide a way to measure the time it takes for a signal to reach its destination compared to the time it takes for background noise to dissipate.,BAOs can be used to make a two-dimensional map of the galaxy distribution in the Universe.,BAOs are used to measure the speed of light in the Universe.,A,kaggle200,"In general relativity, the expansion of the universe is parametrized by a scale factor formula_1 which is related to redshift:
Observational evidence of the acceleration of the universe implies that (at present time) formula_13. Therefore, the following are possible explanations:
In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universe. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology.
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.","In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universe. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology. The length of this standard ruler is given by the maximum distance the acoustic waves could travel in the primordial plasma before the plasma cooled to the point where it became neutral atoms (the epoch of recombination), which stopped the expansion of the plasma density waves, ""freezing"" them into place. The length of this standard ruler (≈490 million light years in today's universe) can be measured by looking at the large scale structure of matter using astronomical surveys. BAO measurements help cosmologists understand more about the nature of dark energy (which causes the accelerating expansion of the universe) by constraining cosmological parameters.
The primary anatomic components of BAOS include stenotic nares (pinched or narrowed nostrils), and elongated soft palate, tracheal hypoplasia (reduced trachea size), and nasopharyngeal turbinates.Other risk factors for BAOS include a lower craniofacial ratio (shorter muzzle in comparison to the overall head length), a higher neck girth, a higher body condition score, and neuter status.Recent studies led by the Roslin Institute at the University of Edinburgh's Royal School of Veterinary Studies has found that a DNA mutation in a gene called ADAMTS3 that is not dependent on skull shape is linked to upper airway syndrome in Norwich Terriers and is also common in French and English bulldogs. This is yet another indication that at least some of what is being called brachycephalic airway syndrome is not linked to skull shape and has previously been found to cause fluid retention and swelling
Sky surveys and baryon acoustic oscillations Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.","In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universeTherefore, the following are possible explanations:
In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universe. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology.
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. This is yet another indication that at least some of what is being called brachycephalic airway syndrome is not linked to skull shape and has previously been found to cause fluid retention and swelling
Sky surveys and baryon acoustic oscillations: Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. BAOs set up a preferred length scale for baryons. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology. BAO measurements help cosmologists understand more about the nature of dark energy (which causes the accelerating expansion of the universe) by constraining cosmological parameters.
The primary anatomic components of BAOS include stenotic nares (pinched or narrowed nostrils), and elongated soft palate, tracheal hypoplasia (reduced trachea size), and nasopharyngeal turbinates.Other risk factors for BAOS include a lower craniofacial ratio (shorter muzzle in com","In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universeTherefore, the following are possible explanations:
In cosmology, baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe, caused by acoustic density waves in the primordial plasma of the early universe. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology.
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe, and can be observed in the cosmic microwave background angular power spectrum. This is yet another indication that at least some of what is being called brachycephalic airway syndrome is not linked to skull shape and has previously been found to cause fluid retention and swelling
Sky surveys and baryon acoustic oscillations: Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. BAOs set up a preferred length scale for baryons. In the same way that supernovae provide a ""standard candle"" for astronomical observations, BAO matter clustering provides a ""standard ruler"" for length scale in cosmology. BAO measurements help cosmologists understand more about the nature of dark energy (which causes the accelerating expansion of the universe) by constraining cosmological parameters.
The primary anatomic components of BAOS include stenotic nares (pinched or narrowed nostrils), and elongated soft palate, tracheal hypoplasia (reduced trachea size), and nasopharyngeal turbinates. Other risk factors for BAOS include a lower craniofacial ratio (shorter muzzle in com[SEP]What is the significance of Baryon Acoustic Oscillations (BAOs) in the study of the universe?","['A', 'E', 'D']",1.0
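To spell out the standard-ruler idea a little more explicitly (the formulas below are a generic cosmology sketch with conventional symbols, not text drawn from the rows above): the comoving sound horizon at the drag epoch z_d,
r_s = \int_{z_d}^{\infty} \frac{c_s(z)}{H(z)}\, dz \approx 147\ \text{Mpc},
is the preferred galaxy-pair separation quoted above, and the angle at which the corresponding bump appears in the galaxy correlation function at redshift z is roughly
\theta_{\rm BAO} \simeq \frac{r_s}{D_M(z)},
where D_M(z) is the transverse comoving distance; measuring \theta_{\rm BAO} at several redshifts therefore maps out D_M(z) and H(z), which is the sense in which BAO constrains dark energy as a standard ruler.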
What can be inferred about the electronic entropy of insulators and metals based on their densities of states at the Fermi level?,"As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as ). == Application to different materials classes == Insulators have zero density of states at the Fermi level due to their band gaps. Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Electronic entropy is thus most relevant for the thermodynamics of condensed phases, where the density of states at the Fermi level can be quite large, and the electronic entropy can thus contribute substantially to thermodynamic behavior. Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. Thus, the density of states- based electronic entropy is essentially zero in these systems. Electronic entropy is the entropy of a system attributable to electrons' probabilistic occupation of states. One can then re-write the entropy as: :S=-k_{\rm B} \int n(E) \left [ f \ln f +(1- f) \ln \left ( 1- f \right ) \right ]dE This is the general formulation of the density-of-states based electronic entropy. ===Useful approximation=== It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropy. As the entropy is given by a sum over the probabilities of occupation of those states, there is an entropy associated with the occupation of the various electronic states. However, when oxides are metallic (i.e. the Fermi level lies within an unfilled, flat set of bands), oxides exhibit some of the largest electronic entropies of any material. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals. A second form of electronic entropy can be attributed to the configurational entropy associated with localized electrons and holes. To a first approximation (i.e. assuming that the charges are distributed randomly), the molar configurational electronic entropy is given by: :S \approx n_\text{sites} \left [ x \ln x + (1-x) \ln (1-x) \right ] where is the fraction of sites on which a localized electron/hole could reside (typically a transition metal site), and is the concentration of localized electrons/holes. Instead of engineering band filling, one may also engineer the shape of the band structure itself via introduction of nanostructures or quantum wells to the materials. ==Configurational electronic entropy== Configurational electronic entropy is usually observed in mixed- valence transition metal oxides, as the charges in these systems are both localized (the system is ionic), and capable of changing (due to the mixed valency). 
The distinction between the valence and conduction bands is meaningless in metals, because conduction occurs in one or more partially filled bands that take on the properties of both the valence and conduction bands. == Band gap == In semiconductors and insulators the two bands are separated by a band gap, while in conductors the bands overlap. Switching from summing over individual states to integrating over energy levels, the entropy can be written as: :S=-k_{\rm B} \int n(E) \left [ p(E) \ln p(E) +(1- p(E)) \ln \left ( 1- p(E)\right ) \right ]dE where is the density of states of the solid. In nonmetals, the valence band is the highest range of electron energies in which electrons are normally present at absolute zero temperature, while the conduction band is the lowest range of vacant electronic states. Electronic entropy can substantially modify phase behavior, as in lithium ion battery electrodes, high temperature superconductors, and some perovskites. More specifically, thermoelectric materials are intentionally doped to exhibit only partially filled bands at the Fermi level, resulting in high electronic entropies. In solid-state physics, the valence band and conduction band are the bands closest to the Fermi level, and thus determine the electrical conductivity of the solid. ","Insulators and metals have zero density of states at the Fermi level, and therefore, their density of states-based electronic entropy is essentially zero.","Insulators have zero density of states at the Fermi level, and therefore, their density of states-based electronic entropy is essentially zero. Metals have non-zero density of states at the Fermi level, and thus, their electronic entropy should be proportional to the temperature and density of states at the Fermi level.","Insulators have non-zero density of states at the Fermi level, and therefore, their density of states-based electronic entropy is proportional to the temperature and density of states at the Fermi level. Metals have zero density of states at the Fermi level, and thus, their electronic entropy is essentially zero.","Insulators and metals have varying densities of states at the Fermi level, and thus, their electronic entropy may or may not be proportional to the temperature and density of states at the Fermi level.","Insulators and metals have non-zero density of states at the Fermi level, and thus, their electronic entropy should be proportional to the temperature and density of states at the Fermi level.",B,kaggle200,"It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropy. Other states are either fully occupied, , or completely unoccupied, . In either case, these states do not contribute to the entropy. If one assumes that the density of states is constant within of the Fermi level, one can derive that the electron heat capacity, equal to:
Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals.
Insulators have zero density of states at the Fermi level due to their band gaps. Thus, the density of states-based electronic entropy is essentially zero in these systems.
where n(E_F) is the density of states (number of levels per unit energy) at the Fermi level. Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as n(E_F) ≥ (k_{\rm B}^2 T)^{-1}).","Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals.
Insulators have zero density of states at the Fermi level due to their band gaps. Thus, the density of states-based electronic entropy is essentially zero in these systems.
Useful approximation: It is useful to recognize that the only states within ~±kBT of the Fermi level contribute significantly to the entropy. Other states are either fully occupied, f = 1, or completely unoccupied, f = 0. In either case, these states do not contribute to the entropy. If one assumes that the density of states is constant within ±kBT of the Fermi level, one can derive that the electron heat capacity is equal to: C_V = T \left( \frac{\partial S}{\partial T} \right)_V = \frac{\pi^2}{3} k_{\rm B}^2 T \, n(E_F), where n(E_F) is the density of states (number of levels per unit energy) at the Fermi level. Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as n(E_F) ≥ (k_{\rm B}^2 T)^{-1}).","Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. Thus, the density of states-based electronic entropy is essentially zero in these systems.
Useful approximation: It is useful to recognize that the only states within ~±kBT of the Fermi level contribute significantly to the entropy. As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as n(E_F) ≥ (k_{\rm B}^2 T)^{-1}). Thus, the density of states-based electronic entropy is essentially zero in these systems.
where n(E_F) is the density of states (number of levels per unit energy) at the Fermi level. As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as n(E_F) ≥ (k_{\rm B}^2 T)^{-1}). Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals.
Insulators have zero density of states at the Fermi level due to their band gaps. Metals have non-zero density of states at the Fermi level. If one assumes that the density of states is constant within ±kBT of the Fermi level, one can derive that the electron heat capacity is equal to:
Metals have non-zero density of states at the Fermi level- It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropyalkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low dens","Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi levelThus, the density of states-based electronic entropy is essentially zero in these systems.
Useful approximation It is useful to recognize that the only states within ~±kBT of the Fermi level contribute significantly to the entropyAs the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as n(EF) ≥ (k2BT)−1)Thus, the density of states-based electronic entropy is essentially zero in these systems.
where is the density of states (number of levels per unit energy) at the Fermi levelAs the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy (where large may be approximately defined as ).Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron like metals.
Insulators have zero density of states at the Fermi level due to their band gapsMetals have non-zero density of states at the Fermi levelIf one assumes that the density of states is constant within of the Fermi level, one can derive that the electron heat capacity, equal to:
Metals have non-zero density of states at the Fermi level- It is useful to recognize that the only states within ~ of the Fermi level contribute significantly to the entropyalkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low dens[SEP]What can be inferred about the electronic entropy of insulators and metals based on their densities of states at the Fermi level?","['B', 'A', 'D']",1.0
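Editorial note (not part of the dataset row above): the entropy integral and the first-order Sommerfeld result quoted in this row's contexts can be checked numerically. The sketch below assumes a hypothetical, constant density of states n(E_F) around the Fermi level and verifies that the exact integral S = -k_B ∫ n(E)[f ln f + (1 - f) ln(1 - f)] dE reproduces S ≈ (π²/3) k_B² T n(E_F); all parameter values are illustrative.

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi(E, mu, T):
    """Fermi-Dirac occupation f(E) at chemical potential mu and temperature T."""
    return 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)

def electronic_entropy(dos, E, mu, T):
    """S = -k_B * integral of n(E) [f ln f + (1 - f) ln(1 - f)] dE over the grid E."""
    f = np.clip(fermi(E, mu, T), 1e-300, 1.0 - 1e-15)  # avoid log(0); clipped states contribute nothing
    integrand = dos * (f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    return -k_B * np.sum(integrand) * (E[1] - E[0])

# Hypothetical constant density of states near the Fermi level (states per eV), for illustration only.
n_EF = 2.0
E = np.linspace(-1.0, 1.0, 20001)   # energy grid in eV, Fermi level at mu = 0
T = 300.0                           # temperature in K

S_numeric = electronic_entropy(np.full_like(E, n_EF), E, 0.0, T)
S_first_order = (np.pi**2 / 3.0) * k_B**2 * T * n_EF
print(S_numeric, S_first_order)     # the two agree to first order in T (both in eV/K)
```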
What are permutation-inversion groups?,"In mathematics, a permutation group is a group G whose elements are permutations of a given set M and whose group operation is the composition of permutations in G (which are thought of as bijective functions from the set M to itself). If a permutation is assigned to each inversion set using the place-based definition, the resulting order of permutations is that of the permutohedron, where an edge corresponds to the swapping of two elements with consecutive values. A permutation's inversion set using place-based notation is the same as the inverse permutation's inversion set using element-based notation with the two components of each ordered pair exchanged. If a permutation were assigned to each inversion set using the element-based definition, the resulting order of permutations would be that of a Cayley graph, where an edge corresponds to the swapping of two elements on consecutive places. The permutation matrix of the inverse is the transpose, therefore v of a permutation is r of its inverse, and vice versa. ==Example: All permutations of four elements== thumb|The six possible inversions of a 4-element permutation The following sortable table shows the 24 permutations of four elements (in the \pi column) with their place-based inversion sets (in the p-b column), inversion related vectors (in the v, l, and r columns), and inversion numbers (in the # column). Inversions are usually defined for permutations, but may also be defined for sequences: Let S be a sequence (or multiset permutation). Likewise, a permutation's inversion set using element-based notation is the same as the inverse permutation's inversion set using place-based notation with the two components of each ordered pair exchanged. A permutation and its inverse have the same inversion number. The way in which the elements of a permutation group permute the elements of the set is called its group action. Group actions have applications in the study of symmetries, combinatorics and many other branches of mathematics, physics and chemistry. == Basic properties and terminology == Being a subgroup of a symmetric group, all that is necessary for a set of permutations to satisfy the group axioms and be a permutation group is that it contain the identity permutation, the inverse permutation of each permutation it contains, and be closed under composition of its permutations. The term permutation group thus means a subgroup of the symmetric group. In mathematics, the term permutation representation of a (typically finite) group G can refer to either of two closely related notions: a representation of G as a group of permutations, or as a group of permutation matrices. Permutation Groups. Inversion table may refer to: * An object used in inversion therapy * A list of numbers encoding a permutation In computer science and discrete mathematics, an inversion in a sequence is a pair of elements that are out of their natural order. == Definitions == ===Inversion=== Let \pi be a permutation. Permutation group algorithms. This permutation group is known, as an abstract group, as the dihedral group of order 8. ==Group actions== In the above example of the symmetry group of a square, the permutations ""describe"" the movement of the vertices of the square induced by the group of symmetries. Having an associative product, an identity element, and inverses for all its elements, makes the set of all permutations of M into a group, Sym(M); a permutation group. 
== Examples == Consider the following set G1 of permutations of the set M = {1, 2, 3, 4}: * e = (1)(2)(3)(4) = (1) **This is the identity, the trivial permutation which fixes each element. * a = (1 2)(3)(4) = (1 2) **This permutation interchanges 1 and 2, and fixes 3 and 4. * b = (1)(2)(3 4) = (3 4) **Like the previous one, but exchanging 3 and 4, and fixing the others. * ab = (1 2)(3 4) **This permutation, which is the composition of the previous two, exchanges simultaneously 1 with 2, and 3 with 4. This Cayley graph of the symmetric group is similar to its permutohedron, but with each permutation replaced by its inverse. == See also == * Factorial number system * Permutation graph * Transpositions, simple transpositions, inversions and sorting * Damerau–Levenshtein distance * Parity of a permutation Sequences in the OEIS: * Sequences related to factorial base representation * Factorial numbers: and * Inversion numbers: * Inversion sets of finite permutations interpreted as binary numbers: (related permutation: ) * Finite permutations that have only 0s and 1s in their inversion vectors: (their inversion sets: ) * Number of permutations of n elements with k inversions; Mahonian numbers: (their row maxima; Kendall-Mann numbers: ) * Number of connected labeled graphs with n edges and n nodes: == References == === Source bibliography === * * * * * * * * * * * === Further reading === * === Presortedness measures === * * * Category:Permutations Category:Order theory Category:String metrics Category:Sorting algorithms Category:Combinatorics Category:Discrete mathematics The inversions of this permutation using element-based notation are: (3, 1), (3, 2), (5, 1), (5, 2), and (5,4). ",Permutation-inversion groups are groups of symmetry operations that are energetically feasible inversions of identical nuclei or rotation with respect to the center of mass.,"Permutation-inversion groups are groups of symmetry operations that are energetically feasible inversions of identical nuclei or rotation with respect to the center of mass, or a combination of both.",Permutation-inversion groups are groups of symmetry operations that are energetically feasible rotations of the entire molecule about the C3 axis.,Permutation-inversion groups are groups of symmetry operations that are energetically feasible inversions of the entire molecule about the C3 axis.,"Permutation-inversion groups are groups of symmetry operations that are energetically feasible permutations of identical nuclei or inversion with respect to the center of mass, or a combination of both.",E,kaggle200,"Groups Groups are like tags or folders in the Finder. Groups can be created at will and notes can be dragged into particular groups to organize them.
One can determine the symmetry operations of the point group for a particular molecule by considering the geometrical symmetry of its molecular model. However, when one uses a point group to classify molecular states, the operations in it are not to be interpreted in the same way. Instead the operations are interpreted as rotating and/or reflecting the vibronic (vibration-electronic) coordinates and these operations commute with the vibronic Hamiltonian. They are ""symmetry operations"" for that vibronic Hamiltonian. The point group is used to classify by symmetry the vibronic eigenstates of a rigid molecule. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-Higgins. Point groups describe the geometrical symmetry of a molecule whereas permutation-inversion groups describe the energy-invariant symmetry.
centrifugal distortion. The permutation-inversion groups required for the complete study of CH4 and H3+ are Td(M) and D3h(M), respectively.
As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of ""rigid"" molecules (sometimes called ""semi-rigid"" molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced a more general type of symmetry group suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of ""non-rigid"" (or ""fluxional"") molecules that tunnel between equivalent geometries (called ""versions"") and to allow for the distorting effects of molecular rotation. These groups are known as ""permutation-inversion"" groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two.","Additionally, as examples, the methane (CH4) and H3+ molecules have highly symmetric equilibrium structures with Td and D3h point group symmetries respectively; they lack permanent electric dipole moments but they do have very weak pure rotation spectra because of rotational centrifugal distortion. The permutation-inversion groups required for the complete study of CH4 and H3+ are Td(M) and D3h(M), respectively. In its ground (N) electronic state the ethylene molecule C2H4 has D2h point group symmetry whereas in the excited (V) state it has D2d symmetry. To treat these two states together it is necessary to allow torsion and to use the double group of the permutation-inversion group G16.A second and less general approach to the symmetry of nonrigid molecules is due to Altmann. In this approach the symmetry groups are known as Schrödinger supergroups and consist of two types of operations (and their combinations): (1) the geometric symmetry operations (rotations, reflections, inversions) of rigid molecules, and (2) isodynamic operations, which take a nonrigid molecule into an energetically equivalent form by a physically reasonable process such as rotation about a single bond (as in ethane) or a molecular inversion (as in ammonia).
Point groups and permutation-inversion groups The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the molecule. For example, a C2 rotation followed by a σv reflection is seen to be a σv' symmetry operation: σv*C2 = σv'. (""Operation A followed by B to form C"" is written BA = C). Moreover, the set of all symmetry operations (including this composition operation) obeys all the properties of a group, given above. So (S,*) is a group, where S is the set of all symmetry operations of some molecule, and * denotes the composition (repeated application) of symmetry operations.
As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of rigid molecules (sometimes called semi-rigid molecules) which undergo only small oscillations about a single equilibrium geometry. Longuet-Higgins introduced a more general type of symmetry group suitable not only for classifying the vibrational and electronic states of rigid molecules but also for classifying their rotational and nuclear spin states. Further, such groups can be used to classify the states of non-rigid (or fluxional) molecules that tunnel between equivalent geometries (called versions) and to allow for the distorting effects of molecular rotation. These groups are known as permutation-inversion groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two.","These groups are known as permutation-inversion groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the twoThese groups are known as ""permutation-inversion"" groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two.The permutation-inversion groups required for the complete study of CH and H are ""T""(M) and ""D""(M), respectively.
As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of ""rigid"" molecules (sometimes called ""semi-rigid"" molecules) which undergo only small oscillations about a single equilibrium geometry. Point groups describe the geometrical symmetry of a molecule whereas permutation-inversion groups describe the energy-invariant symmetry.
centrifugal distortion. The symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-Higgins. The permutation-inversion groups required for the complete study of CH4 and H3+ are Td(M) and D3h(M), respectively. In this approach the symmetry groups are known as Schrödinger supergroups and consist of two types of operations (and their combinations): (1) the geometric symmetry operations (rotations, reflections, inversions) of rigid molecules, and (2) isodynamic operations, which take a nonrigid molecule into an energetically equivalent form by a physically reasonable process such as rotation about a single bond (as in ethane) or a molecular inversion (as in ammonia).
Point groups and permutation-inversion groups The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the mol","These groups are known as permutation-inversion groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the twoThese groups are known as ""permutation-inversion"" groups, because the symmetry operations in them are energetically feasible permutations of identical nuclei, or inversion with respect to the center of mass (the parity operation), or a combination of the two.The permutation-inversion groups required for the complete study of CH and H are ""T""(M) and ""D""(M), respectively.
As discussed above in the section Point groups and permutation-inversion groups, point groups are useful for classifying the vibrational and electronic states of ""rigid"" molecules (sometimes called ""semi-rigid"" molecules) which undergo only small oscillations about a single equilibrium geometryPoint groups describe the geometrical symmetry of a molecule whereas permutation-inversion groups describe the energy-invariant symmetry.
centrifugal distortionThe symmetry classification of the rotational levels, the eigenstates of the full (rotation-vibration-electronic) Hamiltonian, requires the use of the appropriate permutation-inversion group as introduced by Longuet-HigginsThe permutation-inversion groups required for the complete study of CH4 and H3+ are Td(M) and D3h(M), respectivelyIn this approach the symmetry groups are known as Schrödinger supergroups and consist of two types of operations (and their combinations): (1) the geometric symmetry operations (rotations, reflections, inversions) of rigid molecules, and (2) isodynamic operations, which take a nonrigid molecule into an energetically equivalent form by a physically reasonable process such as rotation about a single bond (as in ethane) or a molecular inversion (as in ammonia).
Point groups and permutation-inversion groups The successive application (or composition) of one or more symmetry operations of a molecule has an effect equivalent to that of some single symmetry operation of the mol[SEP]What are permutation-inversion groups?","['E', 'B', 'D']",1.0
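Editorial note (not part of the dataset row above): the element-based inversion definition quoted in this row's context is easy to make concrete. The sketch below uses a five-element permutation whose inversion set matches the context's example, (3, 1), (3, 2), (5, 1), (5, 2), (5, 4), and checks the quoted fact that a permutation and its inverse have the same inversion number; the helper names are illustrative.

```python
def inversions(perm):
    """Element-based inversions: pairs (perm[i], perm[j]) with i < j and perm[i] > perm[j]."""
    return [(perm[i], perm[j])
            for i in range(len(perm))
            for j in range(i + 1, len(perm))
            if perm[i] > perm[j]]

def inverse(perm):
    """Inverse of a permutation of 1..n written in one-line notation."""
    inv = [0] * len(perm)
    for place, value in enumerate(perm, start=1):
        inv[value - 1] = place
    return inv

perm = [3, 5, 1, 2, 4]                  # its inversion set matches the context's example
print(inversions(perm))                 # [(3, 1), (3, 2), (5, 1), (5, 2), (5, 4)]
print(inversions(inverse(perm)))        # different pairs, but...
assert len(inversions(perm)) == len(inversions(inverse(perm)))  # ...the same inversion number
```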
What is the relationship between dielectric loss and the transparency of a material?,"If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. In other words, a translucent material is made up of components with different indices of refraction. In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat).http://www.ece.rutgers.edu/~orfanidi/ewa/ch01.pdf It can be parameterized in terms of either the loss angle or the corresponding loss tangent . Such frequencies of light waves are said to be transmitted. ===Transparency in insulators=== An object may be not transparent either because it reflects the incoming light or because it absorbs the incoming light. Electromagnetically Induced Transparency. In the field of optics, transparency (also called pellucidity or diaphaneity) is the physical property of allowing light to pass through the material without appreciable scattering of light. Materials which do not allow the transmission of any light wave frequencies are called opaque. Thus a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength or roughly 600/15 = 40 nanometers) eliminates much of light scattering, resulting in a translucent or even transparent material. In a dielectric, one of the conduction electrons or the dipole relaxation typically dominates loss in a particular dielectric and manufacturing method. Materials which do not transmit light are called opaque. Materials that allow the transmission of light waves through them are called optically transparent. A transparent material is made up of components with a uniform index of refraction. In attenuating media, the same relation is used, but the permittivity is allowed to be a complex number, called complex electric permittivity: \underline{n} = \mathrm{c}\sqrt{\mu \underline{\varepsilon}}\quad \text{(SI)},\qquad \underline{n} = \sqrt{\mu \underline{\varepsilon}}\quad \text{(cgs)}, where _ε_ is the complex electric permittivity of the medium. The ability of liquids to ""heal"" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Some materials, such as plate glass and clean water, transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. The electric loss tangent can be similarly defined: : \tan \delta_e = \frac{\varepsilon} {\varepsilon'} , upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium). ==Discrete circuit perspective== A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. Another effect of dielectric absorption is sometimes described as ""soakage"". Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. 
","Dielectric loss in a material can cause refraction, which can decrease the material's transparency at higher frequencies.","Dielectric loss in a material can cause absorption, which can reduce the material's transparency at higher frequencies.","Dielectric loss in a material can cause reflection, which can increase the material's transparency at higher frequencies.",Dielectric loss in a material has no effect on the material's transparency at any frequency.,"Dielectric loss in a material can cause scattering, which can increase the material's transparency at higher frequencies.",B,kaggle200,"Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ""ohmic"" or ""resistive"" loss (see ohmic heating). At high frequencies, another effect called ""dielectric loss"" becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). The transmission line is modelled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line.
Once the complex permittivity of the material is known, we can easily calculate its effective conductivity σ_eff and dielectric loss tangent tan δ as:
Dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle ""δ"" or the corresponding loss tangent tan ""δ"". Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Dielectric loss and non-zero DC conductivity in materials cause absorption. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption. However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.","Variants of the Debye equation Cole–Cole equation This equation is used when the dielectric loss peak shows symmetric broadening.
Cole–Davidson equation This equation is used when the dielectric loss peak shows asymmetric broadening.
Havriliak–Negami relaxation This equation considers both symmetric and asymmetric broadening.
Kohlrausch–Williams–Watts function Fourier transform of stretched exponential function.
Curie–von Schweidler law This shows that the response of dielectrics to an applied DC field behaves according to a power law, which can be expressed as an integral over weighted exponential functions.
Djordjevic-Sarkar approximation This is used when the dielectric loss is approximately constant for a wide range of frequencies.
Both n and κ are dependent on the frequency. In most circumstances κ > 0 (light is absorbed) or κ = 0 (light travels forever without loss). In special situations, especially in the gain medium of lasers, it is also possible that κ < 0, corresponding to an amplification of the light.
An alternative convention writes the complex refractive index as n + iκ instead of n − iκ, but where κ > 0 still corresponds to loss. Therefore, these two conventions are inconsistent and should not be confused. The difference is related to defining sinusoidal time dependence as Re[exp(−iωt)] versus Re[exp(+iωt)]. See Mathematical descriptions of opacity.
Dielectric loss and non-zero DC conductivity in materials cause absorption. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption. However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle δ or the corresponding loss tangent tan(δ). Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.","However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies. The resistance and conductance contribute to the loss in a transmission line.
Once the complex permittivity of the material is known, we can easily calculate its effective conductivity formula_36 and dielectric loss tangent formula_37 as:
Dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.gSee Mathematical descriptions of opacity.
Dielectric loss and non-zero DC conductivity in materials cause absorption. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). At high frequencies, another effect called ""dielectric loss"" becomes significant, adding to the losses caused by resistance. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption. Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Dielectric loss and non-zero DC conductivity in materials cause absorptionVariants of the Debye equation Cole–Cole equation This equation is used when the dielectric loss peak shows symmetric broadening.
Cole–Davidson equation This equation is used when the dielectric loss peak shows asymmetric broadening.
Havriliak–Negami relaxation This equation considers both symmetric and asymmetric broadening.
Kohlrausch–Williams–Watts function Fourier transform of stretched exponential function.
Curie–von Schweidler law ","However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.gHowever, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies.The resistance and conductance contribute to the loss in a transmission line.
Once the complex permittivity of the material is known, we can easily calculate its effective conductivity formula_36 and dielectric loss tangent formula_37 as:
Dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.gSee Mathematical descriptions of opacity.
Dielectric loss and non-zero DC conductivity in materials cause absorptionDielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating)At high frequencies, another effect called ""dielectric loss"" becomes significant, adding to the losses caused by resistanceGood dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorptionBoth refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart.
Dielectric loss and non-zero DC conductivity in materials cause absorptionVariants of the Debye equation Cole–Cole equation This equation is used when the dielectric loss peak shows symmetric broadening.
Cole–Davidson equation This equation is used when the dielectric loss peak shows asymmetric broadening.
Havriliak–Negami relaxation This equation considers both symmetric and asymmetric broadening.
Kohlrausch–Williams–Watts function Fourier transform of stretched exponential function.
Curie–von Schweidler law [SEP]What is the relationship between dielectric loss and the transparency of a material?","['B', 'E', 'D']",1.0
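Editorial note (not part of the dataset row above): the loss-tangent definition tan δ = ε''/ε' and the effective-conductivity idea mentioned in this row's contexts can be illustrated with a short calculation. The sketch below assumes a hypothetical Debye-type relaxation for the complex relative permittivity; eps_s, eps_inf and tau are illustrative parameters, not values from the dataset.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity in F/m

def loss_tangent(eps):
    """tan(delta) = eps'' / eps' for a complex relative permittivity eps = eps' - j*eps''."""
    return -eps.imag / eps.real

def effective_conductivity(eps, freq_hz):
    """sigma_eff = omega * eps0 * eps'', one common way to fold dielectric loss into a conductivity."""
    return 2.0 * np.pi * freq_hz * EPS0 * (-eps.imag)

# Hypothetical Debye relaxation: eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + j*omega*tau)
eps_s, eps_inf, tau = 5.0, 2.2, 1.0e-9
for f in np.logspace(6, 11, 6):          # 1 MHz to 100 GHz
    omega = 2.0 * np.pi * f
    eps = eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau)
    print(f"{f:10.3e} Hz  tan(delta) = {loss_tangent(eps):.4f}  "
          f"sigma_eff = {effective_conductivity(eps, f):.3e} S/m")
```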
What is the purpose of measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs) in ultra-low field MRI?,"Retrieved: 14 October 2010. ==Low-temperature superconductivity== === Magnetic resonance imaging (MRI) and nuclear magnetic resonance (NMR)=== The biggest application for superconductivity is in producing the large-volume, stable, and high-intensity magnetic fields required for MRI and NMR. By using a lock-in amplifier the device can read only the frequency corresponding to the magnetic field, ignoring many other sources of noise. ==Instrumentation== A Scanning SQUID Microscope is a sensitive near-field imaging system for the measurement of weak magnetic fields by moving a Superconducting Quantum Interference Device (SQUID) across an area. In condensed matter physics, scanning SQUID microscopy is a technique where a superconducting quantum interference device (SQUID) is used to image surface magnetic field strength with micrometre-scale resolution. Further description of the physics of SQUIDs and SQUID microscopy can be found elsewhere.""Current Imaging using Magnetic Field Sensors"" L.A. Knauss, S.I. Woods and A. OrozcoJ. A magnetic field image can be converted to a current density image in about 1 or 2 seconds. ==Applications== The scanning SQUID microscope was originally developed for an experiment to test the pairing symmetry of the high- temperature cuprate superconductor YBCO. For magnetic current imaging systems, a small (about 30 µm wide) high temperature SQUID is used. With this post-processing of a magnetic image and the low noise present in SQUID images, it is possible to enhance the spatial resolution by factors of 5 or more over the near-field limited magnetic image. In addition such devices require extensive vibration dampening if precise height control is to be maintained. ===High temperature scanning SQUID microscope=== thumb|Scanning SQUID microscope A high temperature Scanning SQUID Microscope using a YBCO SQUID is capable of measuring magnetic fields as small as 20 pT (about 2 million times weaker than the earth's magnetic field). Kirtley, IEEE Spectrum p. 40, Dec. (1996) ===Magnetic field detection using SQUID=== Magnetic current imaging uses the magnetic fields produced by currents in electronic devices to obtain images of those currents. As the SQUID is the most sensitive detector of magnetic fields available and can be constructed at submicrometre widths via lithography, the scanning SQUID microscope allows magnetic fields to be measured with unparalleled resolution and sensitivity. Tsuei et al. used a scanning SQUID microscope to measure the local magnetic field at each of the devices in the figure, and observed a field in ring A approximately equal in magnitude Φ0/2A, where A was the area of the ring. As noted, the coordinate axes selected for this analysis are shown in Figure 1. ===Magnetic Current Imaging=== SQUIDs are the most sensitive magnetic sensors known. The Scanning SQUID Microscopy (SSM) data are current density images and current peak images. In the same property behind the scanning SQUID microscope, the phase of the wavefunction is also altered by the amount of magnetic flux passing through the junction, following the relationship Δφ=π(Φ0). With enough electrons moving, the aggregate magnetic field can be detected by superconducting sensors. 
* Design and applications of a scanning SQUID microscope * Center for Superconductivity Research, University of Maryland * Neocera LLC Category:Josephson effect Category:Measuring instruments Category:Microscopy Category:Scanning probe microscopy Category:Superconductivity As the SQUID material must be superconducting, measurements must be performed at low temperatures. To use the DC SQUID to measure standard magnetic fields, one must either count the number of oscillations in the voltage as the field is changed, which is very difficult in practice, or use a separate DC bias magnetic field parallel to the device to maintain a constant voltage and consequently constant magnetic flux through the loop. The SQUID itself can be used as the pickup coil for measuring the magnetic field, in which case the resolution of the device is proportional to the size of the SQUID. As a result, alone a SQUID can only be used to measure the change in magnetic field from some known value, unless the magnetic field or device size is very small such that Φ < Φ0. ",To measure the magnetization in the same direction as the static magnetic field in T1 relaxation.,"To create a T1-weighted image that is useful for assessing the cerebral cortex, identifying fatty tissue, and characterizing focal liver lesions.","To obtain sufficient signal quality in the microtesla-to-millitesla range, where MRI has been demonstrated recently.",To measure the independent relaxation processes of T1 and T2 in each tissue after excitation.,To change the repetition time (TR) and obtain morphological information in post-contrast imaging.,C,kaggle200,"To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
Aluminium oxide is an electrical insulator used as a substrate (silicon on sapphire) for integrated circuits but also as a tunnel barrier for the fabrication of superconducting devices such as single-electron transistors and superconducting quantum interference devices (SQUIDs).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
MRI requires a magnetic field that is both strong and uniform to a few parts per million across the scan volume. The field strength of the magnet is measured in teslas – and while the majority of systems operate at 1.5 T, commercial systems are available between 0.2 and 7 T. Whole-body MRI systems for research application operate in e.g. 9.4T, 10.5T, 11.7T. Even higher field whole-body MRI systems e.g. 14 T and beyond are in conceptual proposal or in engineering design. Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperatures. Lower field strengths can be achieved with permanent magnets, which are often used in ""open"" MRI scanners for claustrophobic patients. Lower field strengths are also used in a portable MRI scanner approved by the FDA in 2020. Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs).","T1 and T2 Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus.
The standard display of MR images is to represent fluid characteristics in black-and-white images, where different tissues turn out as follows:
T1 and T2 Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus.
MRI requires a magnetic field that is both strong and uniform to a few parts per million across the scan volume. The field strength of the magnet is measured in teslas – and while the majority of systems operate at 1.5 T, commercial systems are available between 0.2 and 7 T. Whole-body MRI systems for research application operate in e.g. 9.4T, 10.5T, 11.7T. Even higher field whole-body MRI systems e.g. 14 T and beyond are in conceptual proposal or in engineering design. Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperatures. Lower field strengths can be achieved with permanent magnets, which are often used in ""open"" MRI scanners for claustrophobic patients. Lower field strengths are also used in a portable MRI scanner approved by the FDA in 2020. Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs).","Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs)Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs).This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
Aluminium oxide is an electrical insulator used as a substrate (silicon on sapphire) for integrated circuits but also as a tunnel barrier for the fabrication of superconducting devices such as single-electron transistors and superconducting quantum interference devices (SQUIDs).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR)Lower field strengths can be achieved with permanent magnets, which are often used in ""open"" MRI scanners for claustrophobic patientsEven higher field whole-body MRI systems e.gLower field strengths are also used in a portable MRI scanner approved by the FDA in 2020- To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR)Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperaturesT1 and T2 Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magne","Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs)Recently, MRI has been demonstrated also at ultra-low fields, i.e., in the microtesla-to-millitesla range, where sufficient signal quality is made possible by prepolarization (on the order of 10–100 mT) and by measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs).This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
Aluminium oxide is an electrical insulator used as a substrate (silicon on sapphire) for integrated circuits but also as a tunnel barrier for the fabrication of superconducting devices such as single-electron transistors and superconducting quantum interference devices (SQUIDs).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR)Lower field strengths can be achieved with permanent magnets, which are often used in ""open"" MRI scanners for claustrophobic patientsEven higher field whole-body MRI systems e.gLower field strengths are also used in a portable MRI scanner approved by the FDA in 2020- To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR)Most clinical magnets are superconducting magnets, which require liquid helium to keep them at low temperaturesT1 and T2 Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magne[SEP]What is the purpose of measuring the Larmor precession fields at about 100 microtesla with highly sensitive superconducting quantum interference devices (SQUIDs) in ultra-low field MRI?","['B', 'C', 'D']",0.5
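Editorial note (not part of the dataset row above): the ultra-low-field figure quoted in this row (Larmor precession at about 100 microtesla) corresponds to a precession frequency in the low-kilohertz range, which is the regime where SQUID detection is attractive. The worked example below uses the standard proton gyromagnetic ratio (γ/2π ≈ 42.58 MHz/T); the list of field strengths is illustrative.

```python
GAMMA_OVER_2PI = 42.577478518e6   # proton gyromagnetic ratio / (2*pi), in Hz per tesla

def proton_larmor_frequency_hz(b_tesla):
    """Larmor precession frequency f = (gamma / 2*pi) * B for protons in a field B."""
    return GAMMA_OVER_2PI * b_tesla

# Illustrative field strengths: ultra-low field, prepolarization-scale, clinical, research.
for b in (100e-6, 10e-3, 1.5, 7.0):
    print(f"B = {b:9.4g} T  ->  f = {proton_larmor_frequency_hz(b):12.5g} Hz")
# At 100 microtesla the precession is only about 4.26 kHz, far below the ~64 MHz of a
# 1.5 T scanner, which is why highly sensitive SQUIDs are used to pick up the signal.
```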
What is the difference between illuminance and luminance?,"As visual perception varies logarithmically, it is helpful to have an appreciation of both illuminance and luminance by orders of magnitude. ==Illuminance== To help compare different orders of magnitude, the following list describes various source of lux, which is measured in lumens per square metre. Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. More generally, the luminance along a light ray can be defined as L_\mathrm{v} = n^2\frac{\mathrm{d}\Phi_\mathrm{v}}{\mathrm{d}G} where * d is the etendue of an infinitesimally narrow beam containing the specified ray, * dv is the luminous flux carried by this beam, * is the index of refraction of the medium. ==Relation to illuminance== thumb|upright=1.5|Comparison of photometric and radiometric quantities The luminance of a reflecting surface is related to the illuminance it receives: \int_{\Omega_\Sigma} L_\text{v} \mathrm{d}\Omega_\Sigma \cos \theta_\Sigma = M_\text{v} = E_\text{v} R, where the integral covers all the directions of emission , * v is the surface's luminous exitance, * v is the received illuminance, * is the reflectance. Then the relationship is simply L_\text{v} = \frac{E_\text{v} R}{\pi}. ==Units== A variety of units have been used for luminance, besides the candela per square metre. ==See also== *Relative luminance *Orders of magnitude (luminance) *Diffuse reflection *Etendue * *Lambertian reflectance *Lightness (color) *Luma, the representation of luminance in a video monitor *Lumen (unit) *Radiance, radiometric quantity analogous to luminance *Brightness, the subjective impression of luminance *Glare (vision) ===Table of SI light- related units=== ==References== == External links == * A Kodak guide to Estimating Luminance and Illuminance using a camera's exposure meter. Luminance levels indicate how much luminous power could be detected by the human eye looking at a particular surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. Illuminants D represent variations of daylight, illuminant E is the equal-energy illuminant, while illuminants F represent fluorescent lamps of various composition. The simplest devices measure the luminance in a single direction while imaging luminance meters measure luminance in a way similar to the way a digital camera records color images. == Formulation == right|thumb|Parameters for defining the luminance The luminance of a specified point of a light source, in a specified direction, is defined by the mixed partial derivative L_\mathrm{v} = \frac{\mathrm{d}^2\Phi_\mathrm{v}}{\mathrm{d}\Sigma\,\mathrm{d}\Omega_\Sigma \cos \theta_\Sigma} where * v is the luminance (cd/m2), * d2v is the luminous flux (lm) leaving the area d in any direction contained inside the solid angle dΣ, * d is an infinitesimal area (m2) of the source containing the specified point, * dΣ is an infinitesimal solid angle (sr) containing the specified direction, * Σ is the angle between the normal nΣ to the surface d and the specified direction. Luminance is used in the video industry to characterize the brightness of displays. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. This standard was prepared as Standard CIE S 009:2002 by the International Commission on Illumination. 
==Luminance meter== A luminance meter is a device used in photometry that can measure the luminance in a particular direction and with a particular solid angle. Brightness is the term for the subjective impression of the objective luminance measurement standard (see for the importance of this contrast). Both the International Electrotechnical Commission (IEC) and the Illuminating Engineering Society (IES) recommend the term luminaire for technical use. ==History== Fixture manufacturing began soon after production of the incandescent light bulb. In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. A standard illuminant is a theoretical source of visible light with a spectral power distribution that is published. A light fixture (US English), light fitting (UK English), or luminaire is an electrical device containing an electric lamp that provides illumination. This means that for an ideal optical system, the luminance at the output is the same as the input luminance. Manufacturers sometimes compare light sources against illuminant E to calculate the excitation purity. ===Illuminant series F=== The F series of illuminants represent various types of fluorescent lighting. Lighting of larger areas is beyond the scope of task lighting. == Task lighting == === Localized average lighting === Localized lighting consists of a luminaire that provides ambient light as well as task light. The process of calculating the white point discards a great deal of information about the profile of the illuminant, and so although it is true that for every illuminant the exact white point can be calculated, it is not the case that knowing the white point of an image alone tells you a great deal about the illuminant that was used to record it. ===White points of standard illuminants=== ==References== ==External links== * Selected colorimetric tables in Excel, as published in CIE 15:2004 * Konica Minolta Sensing: Light sources & Illuminants Category:Light Category:Color ","Illuminance is the amount of light absorbed by a surface per unit area, while luminance is the amount of light reflected by a surface per unit area.","Illuminance is the amount of light falling on a surface per unit area, while luminance is the amount of light emitted by a source per unit area.","Illuminance is the amount of light concentrated into a smaller area, while luminance is the amount of light filling a larger solid angle.","Illuminance is the amount of light emitted by a source per unit area, while luminance is the amount of light falling on a surface per unit area.","Illuminance is the amount of light reflected by a surface per unit area, while luminance is the amount of light absorbed by a surface per unit area.",B,kaggle200,"where both sides now have units of power (energy emitted per unit time) per unit area of emitting surface, per unit solid angle.
Troland does not directly convert to other units, being a retinal luminance per unit area of a pupil.
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle.
The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. Some manufacturers indicate the illuminance their front lights provide to the road at a point located a standard distance right in front of the bicycle.","Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle. The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO.Brightness is the term for the subjective impression of the objective luminance measurement standard (see Objectivity (science) § Objectivity in measurement for the importance of this contrast).
In photometry, illuminance is the total luminous flux incident on a surface, per unit area. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. Luminous emittance is also known as luminous exitance.In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m−2). Luminous exitance is measured in lm·m−2 only, not lux. In the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux. The foot-candle is a non-metric unit of illuminance that is used in photography.Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminance. ""Brightness"" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light.
Illuminance at a given distance in lux The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. Some manufacturers indicate the illuminance their front lights provide to the road at a point located a standard distance right in front of the bicycle."," The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO.Brightness is the term for the subjective impression of the objective luminance measurement standard (see Objectivity (science) § Objectivity in measurement for the importance of this contrast).
In photometry, illuminance is the total luminous flux incident on a surface, per unit area. Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. ""Brightness"" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light.
Illuminance at a given distance in lux The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. Luminous emittance is also known as luminous exitance. In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m−2). Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle.
The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. The foot-candle is a non-metric unit of illuminance that is used in photography. Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminance. In the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux. - where both sides now have units of power "," The procedure for conversion from spectral radiance to luminance is standardized by the CIE and ISO. Brightness is the term for the subjective impression of the objective luminance measurement standard (see Objectivity (science) § Objectivity in measurement for the importance of this contrast).
In photometry, illuminance is the total luminous flux incident on a surface, per unit area. Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. ""Brightness"" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light.
Illuminance at a given distance in lux The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. Luminous emittance is also known as luminous exitance. In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m−2). Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle.
The lux (lx) is this SI unit for illuminance, that is the amount of light that illuminates a surface (the road, in the case of a bike light) per unit area at a given point, weighted according to the sensitivity of the human eye to various colours of light. The foot-candle is a non-metric unit of illuminance that is used in photography. Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminance. In the CGS system, the unit of illuminance is the phot, which is equal to 10000 lux. - where both sides now have units of power [SEP]What is the difference between illuminance and luminance?","['B', 'D', 'E']",1.0
What is a magnetic monopole in particle physics?,"In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole, if it exists, would have the defining property of producing a magnetic field whose monopole term is non-zero. A true magnetic monopole would be a new elementary particle, and would violate Gauss's law for magnetism . (See below.) ==Poles and magnetism in ordinary matter== All matter isolated to date, including every atom on the periodic table and every particle in the Standard Model, has zero magnetic monopole charge. A magnetic monopole would have a net north or south ""magnetic charge"". In some theoretical models, magnetic monopoles are unlikely to be observed, because they are too massive to create in particle accelerators (see below), and also too rare in the Universe to enter a particle detector with much probability. Coleman, ""The Magnetic Monopole 50 years Later"", reprinted in Aspects of Symmetry The known elementary particles that have electric charge are electric monopoles. The hypothetical existence of a magnetic monopole would imply that the electric charge must be quantized in certain units; also, the existence of the electric charges implies that the magnetic charges of the hypothetical magnetic monopoles, if they exist, must be quantized in units inversely proportional to the elementary electric charge. Electric monopole, or object with non-zero divergency of electrical field may refer to: * Electric charge ==See also== * Magnetic monopole (non-zero divergency of magnetic field) In mathematics, a monopole is a connection over a principal bundle G with a section of the associated adjoint bundle. ==Physical interpretation== Physically, the section can be interpreted as a Higgs field, where the connection and Higgs field should satisfy the Bogomolny equations and be of finite action. == See also == * Nahm equations * Instanton * Magnetic monopole * Yang–Mills theory == References == * * * Category:Differential geometry Category:Mathematical physics However, in the multipole expansion of a magnetic field, the ""monopole"" term is always exactly zero (for ordinary matter). While these should not be confused with hypothetical elementary monopoles existing in the vacuum, they nonetheless have similar properties and can be probed using similar techniques. For instance, a wide class of particles known as the X and Y bosons are predicted to mediate the coupling of the electroweak and strong forces, but these particles are extremely heavy and well beyond the capabilities of any reasonable particle accelerator to create. == Searches for magnetic monopoles == Experimental searches for magnetic monopoles can be placed in one of two categories: those that try to detect preexisting magnetic monopoles and those that try to create and detect new magnetic monopoles. This constitutes the first example of a quasi-magnetic monopole observed within a system governed by quantum field theory. 
==See also== * Bogomolny equations * Dirac string * Dyon * Felix Ehrenhaft * Flatness problem * Gauss's law for magnetism * Ginzburg–Landau theory * Halbach array * Horizon problem * Instanton * Magnetic monopole problem * Meron * Soliton * 't Hooft–Polyakov monopole * Wu–Yang monopole * Magnetic current ==Notes== ==References== ===Bibliography=== * * * * * * * * * * * ==External links== Category:Hypothetical elementary particles Category:Magnetism Category:Gauge theories Category:Hypothetical particles Category:Unsolved problems in physics Multipole magnets are magnets built from multiple individual magnets, typically used to control beams of charged particles. A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. Magnetism in bar magnets and electromagnets is not caused by magnetic monopoles, and indeed, there is no known experimental or observational evidence that magnetic monopoles exist. Further advances in theoretical particle physics, particularly developments in grand unified theories and quantum gravity, have led to more compelling arguments (detailed below) that monopoles do exist. Nevertheless, Pierre Curie pointed out in 1894 that magnetic monopoles could conceivably exist, despite not having been seen so far. ===Quantum mechanics=== The quantum theory of magnetic charge started with a paper by the physicist Paul Dirac in 1931. Retrieved February 1, 2014. has never been observed in experiments.Magnetic Monopoles, report from Particle data group, updated August 2015 by D. Milstead and E.J. Weinberg. ",A hypothetical elementary particle that is an isolated electric charge with both positive and negative poles.,A hypothetical elementary particle that is an isolated magnet with no magnetic poles.,"A hypothetical elementary particle that is an isolated electric charge with only one electric pole, either a positive pole or a negative pole.",A hypothetical elementary particle that is an isolated magnet with both north and south poles.,"A hypothetical elementary particle that is an isolated magnet with only one magnetic pole, either a north pole or a south pole.",E,kaggle200,"A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The term ""dipole"" means ""two poles"", corresponding to the fact that a dipole magnet typically contains a ""north pole"" on one side and a ""south pole"" on the other side. This is analogous to an electric dipole, which has positive charge on one side and negative charge on the other. However, an electric dipole and magnetic dipole are fundamentally quite different. In an electric dipole made of ordinary matter, the positive charge is made of protons and the negative charge is made of electrons, but a magnetic dipole does ""not"" have different types of matter creating the north pole and south pole. Instead, the two magnetic poles arise simultaneously from the aggregate effect of all the currents and intrinsic moments throughout the magnet. Because of this, the two poles of a magnetic dipole must always have equal and opposite strength, and the two poles cannot be separated from each other.
The axino is a hypothetical elementary particle predicted by some theories of particle physics. Peccei–Quinn theory attempts to explain the observed phenomenon known as the strong CP problem by introducing a hypothetical real scalar particle called the axion. Adding supersymmetry to the model predicts the existence of a fermionic superpartner for the axion, the axino, and a bosonic superpartner, the ""saxion"". They are all bundled up in a chiral superfield.
The pole model usually treats magnetic charge as a mathematical abstraction, rather than a physical property of particles. However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). In other words, it would possess a ""magnetic charge"" analogous to an electric charge. Magnetic field lines would start or end on magnetic monopoles, so if they exist, they would give exceptions to the rule that magnetic field lines neither start nor end. Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole would have a net north or south ""magnetic charge"". Modern interest in the concept stems from particle theories, notably the grand unified and superstring theories, which predict their existence. The known elementary particles that have electric charge are electric monopoles.","Magnetic monopoles Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Even though a magnet is said to have a north pole and a south pole, these two poles cannot be separated from each other. A monopole—if such a thing exists—would be a new and fundamentally different kind of magnetic object. It would act as an isolated north pole, not attached to a south pole, or vice versa. Monopoles would carry ""magnetic charge"" analogous to electric charge. Despite systematic searches since 1931, as of 2010, they have never been observed, and could very well not exist.Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show a certain symmetry, just as quantum theory predicts that individual positive or negative electric charges can be observed without the opposing charge, isolated South or North magnetic poles should be observable. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron.
The axino is a hypothetical elementary particle predicted by some theories of particle physics. Peccei–Quinn theory attempts to explain the observed phenomenon known as the strong CP problem by introducing a hypothetical real scalar particle called the axion. Adding supersymmetry to the model predicts the existence of a fermionic superpartner for the axion, the axino, and a bosonic superpartner, the saxion. They are all bundled up in a chiral superfield.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). A magnetic monopole would have a net north or south ""magnetic charge"". Modern interest in the concept stems from particle theories, notably the grand unified and superstring theories, which predict their existence. The known elementary particles that have electric charge are electric monopoles.","They are all bundled up in a chiral superfield.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). A magnetic monopole would have a net north or south ""magnetic charge"". A monopole—if such a thing exists—would be a new and fundamentally different kind of magnetic object. Monopoles would carry ""magnetic charge"" analogous to electric charge. Despite systematic searches since 1931, as of 2010, they have never been observed, and could very well not exist. Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Magnetic monopoles Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron.
The axino is a hypothetical elementary particle predicted by some theories of particle physics. The known elementary particles that have electric charge are electric monopoles. - A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The known elementary particles that have electric charge are electric monopoles. They are all bundled up in a chiral superfield.
The pole model usually treats magnetic charge as a mathema","They are all bundled up in a chiral superfield.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). Some theories (such as Grand Unified Theories) have predicted the existence of magnetic monopoles, but so far, none have been observed.
In particle physics, a magnetic monopole is a hypothetical elementary particle that is an isolated magnet with only one magnetic pole (a north pole without a south pole or vice versa). However, a magnetic monopole is a hypothetical particle (or class of particles) that physically has only one magnetic pole (either a north pole or a south pole). A magnetic monopole would have a net north or south ""magnetic charge"". A monopole—if such a thing exists—would be a new and fundamentally different kind of magnetic object. Monopoles would carry ""magnetic charge"" analogous to electric charge. Despite systematic searches since 1931, as of 2010, they have never been observed, and could very well not exist. Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Magnetic monopoles Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron.
The axino is a hypothetical elementary particle predicted by some theories of particle physics. The known elementary particles that have electric charge are electric monopoles. - A magnetic dipole is something whose magnetic field is predominantly or exactly described by the magnetic dipole term of the multipole expansion. The known elementary particles that have electric charge are electric monopoles. They are all bundled up in a chiral superfield.
The pole model usually treats magnetic charge as a mathema[SEP]What is a magnetic monopole in particle physics?","['E', 'C', 'A']",1.0
What is the difference between redshift due to the expansion of the universe and Doppler redshift?,"The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity.. There is a distinction between a redshift in cosmological context as compared to that witnessed when nearby objects exhibit a local Doppler-effect redshift. The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). Conversely, Doppler effect redshifts () are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Popular literature often uses the expression ""Doppler redshift"" instead of ""cosmological redshift"" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equationOdenwald & Fienberg 1993 which is instead characterized by special relativity; thus is impossible while, in contrast, is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of light.Speed faster than light is allowed because the expansion of the spacetime metric is described by general relativity in terms of sequences of only locally valid inertial frames as opposed to a global Minkowski metric. Redshift is a shift in the spectrum of the emitted electromagnetic radiation from an object toward lower energies and frequencies, associated with the phenomenon of the Doppler effect. \---- Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). In standard inflationary cosmological models, the redshift of cosmological bodies is ascribed to the expansion of the universe, with greater redshift indicating greater cosmic distance from the Earth (see Hubble's Law). In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. Rather than cosmological redshifts being a consequence of the relative velocities that are subject to the laws of special relativity (and thus subject to the rule that no two locally separated objects can have relative velocities with respect to each other faster than the speed of light), the photons instead increase in wavelength and redshift because of a global feature of the spacetime through which they are traveling. A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. 
Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. The effect is due to the peculiar velocities of the galaxies causing a Doppler shift in addition to the redshift caused by the cosmological expansion. (This article is useful for explaining the cosmological redshift mechanism as well as clearing up misconceptions regarding the physics of the expansion of space.) ===Books=== * * * * * * * * * * * See also physical cosmology textbooks for applications of the cosmological and gravitational redshifts. ==External links== * Ned Wright's Cosmology tutorial * Cosmic reference guide entry on redshift * Mike Luciuk's Astronomical Redshift tutorial * Animated GIF of Cosmological Redshift by Wayne Hu * Category:Astronomical spectroscopy Category:Doppler effects Category:Effects of gravitation Category:Physical cosmology Category:Physical quantities Category:Concepts in astronomy Consequently, this type of redshift is called the Doppler redshift. Otherwise, redshifts combine as :1+z=(1+z_{\mathrm{Doppler}})(1+z_{\mathrm{expansion}}) which yields solutions where certain objects that ""recede"" are blueshifted and other objects that ""approach"" are redshifted. The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. To derive the redshift effect, use the geodesic equation for a light wave, which is :ds^2=0=-c^2dt^2+\frac{a^2 dr^2}{1-kr^2} where * is the spacetime interval * is the time interval * is the spatial interval * is the speed of light * is the time-dependent cosmic scale factor * is the curvature per unit area. ","Redshift due to the expansion of the universe depends on the rate of change of a(t) at the times of emission or absorption, while Doppler redshift depends on the increase of a(t) in the whole period from emission to absorption.","Redshift due to the expansion of the universe depends on the local velocity of the object emitting the light, while Doppler redshift depends on the cosmological model chosen to describe the expansion of the universe.",There is no difference between redshift due to the expansion of the universe and Doppler redshift.,"Redshift due to the expansion of the universe depends on the cosmological model chosen to describe the expansion of the universe, while Doppler redshift depends on the local velocity of the object emitting the light.","Redshift due to the expansion of the universe depends on the increase of a(t) in the whole period from emission to absorption, while Doppler redshift depends on the rate of change of a(t) at the times of emission or absorption.",D,kaggle200,"Redshift is a GPU-accelerated 3D rendering software developed by Redshift Rendering Technologies Inc., a subsidiary of Maxon.
Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called ""cosmic time–redshift relation"". Denote a density ratio as :
Astronomers often refer to the cosmological redshift as a Doppler shift which can lead to a misconception. Although similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of space. Accurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, ""Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..."" Steven Weinberg clarified, ""The increase of wavelength from emission to absorption of light does not depend on the rate of change of [here is the Robertson–Walker scale factor] at the times of emission or absorption, but on the increase of in the whole period from emission to absorption.""","In physics, a redshift is an increase in the wavelength, and corresponding decrease in the frequency and photon energy, of electromagnetic radiation (such as light). The opposite change, a decrease in wavelength and simultaneous increase in frequency and energy, is known as a negative redshift, or blueshift. The terms derive from the colours red and blue which form the extremes of the visible light spectrum. The three main causes of electromagnetic redshift in astronomy and cosmology are, first, radiation traveling between objects that are moving apart (""relativistic"" redshift, an example of the relativistic Doppler effect); second, the gravitational redshift due to radiation traveling towards an object in a weaker gravitational potential; and third, the cosmological redshift due to radiation traveling through expanding space. All sufficiently distant light sources show redshift for a velocity proportionate to their distance from Earth, a fact known as Hubble's law.
Redshift is a GPU-accelerated 3D rendering software developed by Redshift Rendering Technologies Inc., a subsidiary of Maxon.
Distinguishing between cosmological and local effects For cosmological redshifts of z < 0.01 additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble Law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of space. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched.The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, ""Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that..."" Steven Weinberg clarified, ""The increase of wavelength from emission to absorption of light does not depend on the rate of change of a(t) [here a(t) is the Robertson–Walker scale factor] at the times of emission or absorption, but on the increase of a(t) in the whole period from emission to absorption.""Popular literature often uses the expression ""Doppler redshift"" instead of ""cosmological redshift"" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equation which is instead characterized by special relativity; thus v ≥ c is impossible while, in contrast, v ≥ c is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of light. More mathematically, the viewpoint that ""distant galaxies are receding"" and the viewpoint that ""the space between galaxies is expanding"" are related by changing coordinate systems. 
Expressing this precisely requires working with the mathematics of the Friedmann–Robertson–Walker metric.If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted.","The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocityAlthough similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of spaceAccurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift)It is as simple as that..."" Steven Weinberg clarified, ""The increase of wavelength from emission to absorption of light does not depend on the rate of change of a(t) [here a(t) is the Robertson–Walker scale factor] at the times of emission or absorption, but on the increase of a(t) in the whole period from emission to absorption.""Popular literature often uses the expression ""Doppler redshift"" instead of ""cosmological redshift"" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equation which is instead characterized by special relativity; thus v ≥ c is impossible while, in contrast, v ≥ c is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of lightThe cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched.The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler ","The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocityAlthough similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of spaceAccurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). It is as simple as that..."" Steven Weinberg clarified, ""The increase of wavelength from emission to absorption of light does not depend on the rate of change of a(t) [here a(t) is the Robertson–Walker scale factor] at the times of emission or absorption, but on the increase of a(t) in the whole period from emission to absorption."" Popular literature often uses the expression ""Doppler redshift"" instead of ""cosmological redshift"" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equation which is instead characterized by special relativity; thus v ≥ c is impossible while, in contrast, v ≥ c is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of light. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched. The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler [SEP]What is the difference between redshift due to the expansion of the universe and Doppler redshift?","['E', 'D', 'A']",0.5
What is the relationship between Coordinated Universal Time (UTC) and Universal Time (UT1)?,"UTC (on which civil time is usually based) is a compromise, stepping with atomic seconds but periodically reset by a leap second to match UT1. A leap second is a one-second adjustment that is occasionally applied to Coordinated Universal Time (UTC), to accommodate the difference between precise time (International Atomic Time (TAI), as measured by atomic clocks) and imprecise observed solar time (UT1), which varies due to irregularities and long-term slowdown in the Earth's rotation. Universal Time (UT or UT1) is a time standard based on Earth's rotation. Since 1972, UTC is calculated by subtracting the accumulated leap seconds from International Atomic Time (TAI), which is a coordinate time scale tracking notional proper time on the rotating surface of the Earth (the geoid). Leap seconds are inserted as necessary to keep UTC within 0.9 seconds of the UT1 variant of universal time. The difference between UT1 and UTC is known as DUT1. ===Adoption in various countries=== The table shows the dates of adoption of time zones based on the Greenwich meridian, including half-hour zones. The current version of UTC is defined by International Telecommunication Union Recommendation (ITU-R TF.460-6), Standard-frequency and time-signal emissions, and is based on International Atomic Time (TAI) with leap seconds added at irregular intervals to compensate for the accumulated difference between TAI and time measured by Earth's rotation. :DUT1 = UT1 − UTC UTC is maintained via leap seconds, such that DUT1 remains within the range −0.9 s < DUT1 < +0.9 s. However, there are also several other infrequently used time standards that are referred to as Universal Time, which agree within 0.03 seconds with UT1: * UT0 is Universal Time determined at an observatory by observing the diurnal motion of stars or extragalactic radio sources, and also from ranging observations of the Moon and artificial Earth satellites. In 1972, the leap-second system was introduced so that the UTC seconds could be set exactly equal to the standard SI second, while still maintaining the UTC time of day and changes of UTC date synchronized with those of UT1. The UTC offset is the difference in hours and minutes between Coordinated Universal Time (UTC) and local solar time, at a particular place. See the ""Current number of leap seconds"" section for the number of leap seconds inserted to date. ==Etymology== The official abbreviation for Coordinated Universal Time is UTC. Coordinated Universal Time or UTC is the primary time standard by which the world regulates clocks and time. UTC (and TAI) would be more and more ahead of UT; it would coincide with local mean time along a meridian drifting eastward faster and faster. This caused engineers worldwide to discuss a negative leap second and other possible timekeeping measures of which some could eliminate leap seconds. ==Future of leap seconds== The TAI and UT1 time scales are precisely defined, the former by atomic clocks (and thus independent of Earth's rotation) and the latter by astronomical observations (that measure actual planetary rotation and thus the solar time at the Greenwich meridian). Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. Those astronomical observatories and other users that require UT1 could run off UT1 – although in many cases these users already download UT1-UTC from the IERS, and apply corrections in software. 
==See also== * Clock drift, phenomenon where a clock gains or loses time compared to another clock * DUT1, which describes the difference between coordinated universal time (UTC) and universal time (UT1) * Dynamical time scale * Leap year, a year containing one extra day or month ==Notes== ==References== ==Further reading== * * * * * ==External links== *IERS Bulletins, including Bulletin C (leap second announcements) *LeapSecond.com – A web site dedicated to precise time and frequency *NIST FAQ about leap year and leap second *The leap second: its history and possible future * * * * * Judah Levine's Everyday Time and Atomic Time series ** ** ** ** ** Category:Timekeeping Category:1972 introductions Category:1972 in science Starting January 1, 1972, UTC was defined to follow UT1 within 0.9 seconds rather than UT2, marking the decline of UT2. GPS time always remains exactly 19 seconds behind TAI (neither system is affected by the leap seconds introduced in UTC). ===Time zones=== Time zones are usually defined as differing from UTC by an integer number of hours, although the laws of each jurisdiction would have to be consulted if sub-second accuracy was required. For example, local time on the east coast of the United States is five hours behind UTC during winter, but four hours behind while daylight saving is observed there. ==History== In 1928, the term Universal Time (UT) was introduced by the International Astronomical Union to refer to GMT, with the day starting at midnight. ",UTC and Universal Time (UT1) are identical time scales that are used interchangeably in science and engineering.,"UTC is a time scale that is completely independent of Universal Time (UT1). UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"".","UTC is an atomic time scale designed to approximate Universal Time (UT1), but it differs from UT1 by a non-integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"".","UTC is an atomic time scale designed to approximate Universal Time (UT1), but it differs from UT1 by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"".",UTC is a time scale that is based on the irregularities in Earth's rotation and is completely independent of Universal Time (UT1).,D,kaggle200,"Coordinated Mars Time (MTC) or Martian Coordinated Time is a proposed Mars analog to Universal Time (UT1) on Earth. It is defined as the mean solar time at Mars's prime meridian. The name ""MTC"" is intended to parallel the Terran Coordinated Universal Time (UTC), but this is somewhat misleading: what distinguishes UTC from other forms of UT is its leap seconds, but MTC does not use any such scheme. MTC is more closely analogous to UT1.
UT1 is the principal form of Universal Time. However, there are also several other infrequently-used time standards that are referred to as Universal Time, which agree within 0.03 seconds with UT1:
Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". To date these steps (and difference ""TAI-UTC"") have always been positive.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earth's rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". The Global Positioning System broadcasts a very precise time signal based on UTC time.","When introduced, broadcast time signals were based on UT, and hence on the rotation of the Earth. In 1955 the BIH adopted a proposal by William Markowitz, effective January 1, 1956, dividing UT into UT0 (UT as formerly computed), UT1 (UT0 corrected for polar motion) and UT2 (UT0 corrected for polar motion and seasonal variation). UT1 was the version sufficient for ""many astronomical and geodetic applications"", while UT2 was to be broadcast over radio to the public.UT0 and UT2 soon became irrelevant due to the introduction of Coordinated Universal Time (UTC). Starting in 1956, WWV broadcast an atomic clock signal stepped by 20 ms increments to bring it into agreement with UT1. The up to 20 ms error from UT1 is on the same order of magnitude as the differences between UT0, UT1, and UT2. By 1960, the U.S. Naval Observatory, the Royal Greenwich Observatory, and the UK National Physical Laboratory had developed UTC, with a similar stepping approach. The 1960 URSI meeting recommended that all time services should follow the lead of the UK and US and broadcast coordinated time using a frequency offset from cesium aimed to match the predicted progression of UT2 with occasional steps as needed. Starting January 1, 1972, UTC was defined to follow UT1 within 0.9 seconds rather than UT2, marking the decline of UT2.Modern civil time generally follows UTC. In some countries, the term Greenwich Mean Time persists in common usage to this day in reference to UT1, in civil timekeeping as well as in astronomical almanacs and other references. Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. The difference between UT1 and UTC is known as DUT1.
Universal Time (UT1) is the Earth Rotation Angle (ERA) linearly scaled to match historical definitions of mean solar time at 0° longitude. At high precision, Earth's rotation is irregular and is determined from the positions of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as GPS satellite orbits.
Coordinated Universal Time (UTC) is an atomic time scale designed to approximate UT1. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". To date these steps (and difference ""TAI-UTC"") have always been positive.
The Global Positioning System broadcasts a very precise time signal worldwide, along with instructions for converting GPS time (GPST) to UTC. It was defined with a constant offset from TAI: GPST = TAI - 19 s. The GPS time standard is maintained independently but regularly synchronized with or from, UTC time.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. It varies from TAI because of the irregularities in Earth's rotation. Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC differs from TAI by an integral number of seconds. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". The Global Positioning System broadcasts a very precise time signal based on UTC time.","The difference between UT1 and UTC is known as DUT1.
Universal Time (UT1) is the Earth Rotation Angle (ERA) linearly scaled to match historical definitions of mean solar time at 0° longitude. However, there are also several other infrequently-used time standards that are referred to as Universal Time, which agree within 0.03 seconds with UT1:
Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. Starting January 1, 1972, UTC was defined to follow UT1 within 0.9 seconds rather than UT2, marking the decline of UT2. Modern civil time generally follows UTC. MTC is more closely analogous to UT1.
UT1 is the principal form of Universal Time. UT1 was the version sufficient for ""many astronomical and geodetic applications"", while UT2 was to be broadcast over radio to the public. UT0 and UT2 soon became irrelevant due to the introduction of Coordinated Universal Time (UTC). Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. To date these steps (and difference ""TAI-UTC"") have always been positive.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. The GPS time standard is maintained independently but regularly synchronized with or from, UTC time.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. To date these steps (and difference ""TAI-UTC"") have always been positive.
The Global Positioning System broadcasts a very precise time signal worldwide, along with instructions for converting GPS time (GPST) to UTC. UTC differs from TAI by an integral number of seconds. When introduced, broadcast time signals were based on UT, and hence on the rotation of the Earth. At high ","The difference between UT1 and UTC is known as DUT1.
Universal Time (UT1) is the Earth Rotation Angle (ERA) linearly scaled to match historical definitions of mean solar time at 0° longitude. However, there are also several other infrequently-used time standards that are referred to as Universal Time, which agree within 0.03 seconds with UT1:
Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. UTC is kept within 0.9 second of UT1 by the introduction of one-second steps to UTC, the ""leap second"". Coordinated Universal Time (UTC) is an atomic time scale designed to approximate Universal Time. Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. Starting January 1, 1972, UTC was defined to follow UT1 within 0.9 seconds rather than UT2, marking the decline of UT2. Modern civil time generally follows UTC. MTC is more closely analogous to UT1.
UT1 is the principal form of Universal Time. UT1 was the version sufficient for ""many astronomical and geodetic applications"", while UT2 was to be broadcast over radio to the public. UT0 and UT2 soon became irrelevant due to the introduction of Coordinated Universal Time (UTC). Universal Time (UT1) is mean solar time at 0° longitude, computed from astronomical observations. To date these steps (and difference ""TAI-UTC"") have always been positive.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. The GPS time standard is maintained independently but regularly synchronized with or from, UTC time.
International Atomic Time (TAI) is the primary international time standard from which other time standards are calculated. To date these steps (and difference ""TAI-UTC"") have always been positive.
The Global Positioning System broadcasts a very precise time signal worldwide, along with instructions for converting GPS time (GPST) to UTC. UTC differs from TAI by an integral number of seconds. When introduced, broadcast time signals were based on UT, and hence on the rotation of the Earth. At high [SEP]What is the relationship between Coordinated Universal Time (UTC) and Universal Time (UT1)?","['D', 'B', 'C']",1.0
What is the reason for heating metals to a temperature just above the upper critical temperature?,"Bringing a metal to its forging temperature allows the metal's shape to be changed by applying a relatively small force, without creating cracks. Forging temperature is the temperature at which a metal becomes substantially more soft, but is lower than the melting temperature, such that it can be reshaped by forging. Selecting the maximum forging temperature allows metals to be forged more easily, lowering the forging pressure and thus the wear on metal-forming dies. The temperature at which a metal is forged can affect the homogeneity in microstructure and mechanical properties of forged products, which can highly affect the performance of products used in manufacturing. The alloy exhibits a higher creep resistance and strength at high temperatures, making service temperatures of above 1060 °C possible for the material. Partly due to the high melting point, refractory metals are stable against creep deformation to very high temperatures. ==Definition== Most definitions of the term 'refractory metals' list the extraordinarily high melting point as a key requirement for inclusion. In metals, the starting of creep correlates with the melting point of the material; the creep in aluminium alloys starts at 200 °C, while for refractory metals temperatures above 1500 °C are necessary. This resistance against deformation at high temperatures makes the refractory metals suitable against strong forces at high temperature, for example in jet engines, or tools used during forging. ===Chemical=== The refractory metals show a wide variety of chemical properties because they are members of three distinct groups in the periodic table. However its rarity makes it the most expensive of the refractory metals. ==Advantages and shortfalls== The strength and high-temperature stability of refractory metals make them suitable for hot metalworking applications and for vacuum furnace technology. Their high melting points make powder metallurgy the method of choice for fabricating components from these metals. Material Forging Temperature Melting point Celsius Fahrenheit °C Carbon steel - 0.50% carbon content 1230 2246 ~1425-1540 Stainless steel (Nonmagnetic) 1150 2102 ~1400-1530 Stainless steel (Magnetic) 1095 2003 ~1400-1530 Nickel 1095 2003 1453 Titanium 955 1751 1660 Copper 900 1652 1083 Brass (25 alloy types with varying ratios of copper and zinc) 815 1499 ~900-940 Commercial bronze (90% copper and 10% tin) 900 to 419.53 1652 to 787.154 ~950 Aluminium 300 - 480 'Aluminum and Aluminum Alloys"" edited by Joseph R. Davis, p248 600 - 900 660 Zinc 419.53 787.154 420 Lead 327.46 621.428 327 Iron 1371 2500 1535 Tin 231.93 449.474 232 ==See also== * Plasticity * == Notes == ==References== Category:Thermodynamics Category:Plasticity (physics) Category:Metals Category: Zinc The high- temperature creep strain of alloys must be limited for them to be used. For most metals, forging temperature is approximately 70% of the absolute temperature (usually measured in kelvins) of its melting point. However, poor low-temperature fabricability and extreme oxidability at high temperatures are shortcomings of most refractory metals. They all share some properties, including a melting point above 2000 °C and high hardness at room temperature. It is unique in that it can be worked through annealing to achieve a wide range of strength and ductility, and is the least dense of the refractory metals. 
It is useful as an alloy to other refractory metals, where it adds ductility and tensile strength. Tungsten and its alloys are often used in applications where high temperatures are present but still a high strength is necessary and the high density is not troublesome. Ceramics such as alumina, zirconia, and especially magnesia will tolerate the highest temperatures. These high melting points define most of their applications. ","To prevent the grains of solution from growing too large, which decreases mechanical properties such as toughness, shear strength, and tensile strength.","To increase the size of the grains of solution, which enhances mechanical properties such as toughness, shear strength, and tensile strength.","To prevent the grains of solution from growing too large, which enhances mechanical properties such as toughness, shear strength, and tensile strength.","To prevent the grains of solution from growing too small, which enhances mechanical properties such as toughness, shear strength, and tensile strength.","To increase the size of the grains of solution, which decreases mechanical properties such as toughness, shear strength, and tensile strength.",C,kaggle200,"Kawabata evaluation system measures the mechanical properties of the textiles, such as tensile strength, shear strength, surface friction, and roughness, The Kawabata evaluation system predicts human responses and understands the perception of softness. It can also be used to figure out the short-term heat transfer properties that are responsible for the feeling of coolness when fabrics touch the skin while being worn.
Mechanical properties include elasticity and plasticity, tensile strength, compressive strength, shear strength, fracture toughness, ductility (low in brittle materials), and indentation hardness. Solid mechanics is the study of the behavior of solid matter under external actions such as external forces and temperature changes.
There are no published standard values for shear strength as there are for tensile and yield strength. Instead, it is commonly estimated as 60% of the ultimate tensile strength. Shear strength can be measured by a torsion test, where it is equal to the torsional strength.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage.","As a very rough guide relating tensile, yield, and shear strengths: USS: Ultimate Shear Strength, UTS: Ultimate Tensile Strength, SYS: Shear Yield Stress, TYS: Tensile Yield Stress There are no published standard values for shear strength like with tensile and yield strength. Instead, it is common for it to be estimated as 60% of the ultimate tensile strength. Shear strength can be measured by a torsion test where it is equal to their torsional strength.
Temperatures elevated above 300 °C (572 °F) degrade the mechanical properties of concrete, including compressive strength, fracture strength, tensile strength, and elastic modulus, with respect to deleterious effect on its structural changes.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too large. For instance, when steel is heated above the upper critical-temperature, small grains of austenite form. These grow larger as the temperature is increased. When cooled very quickly, during a martensite transformation, the austenite grain-size directly affects the martensitic grain-size. Larger grains have large grain-boundaries, which serve as weak spots in the structure. The grain size is usually controlled to reduce the probability of breakage.The diffusion transformation is very time-dependent. Cooling a metal will usually suppress the precipitation to a much lower temperature. Austenite, for example, usually only exists above the upper critical temperature. However, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperature. Such austenite is highly unstable and, if given enough time, will precipitate into various microstructures of ferrite and cementite. The cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructures. However, the martensite transformation is time-independent. If the alloy is cooled to the martensite transformation (Ms) temperature before other microstructures can fully form, the transformation will usually occur at just under the speed of sound.When austenite is cooled slow enough that a martensite transformation does not occur, the austenite grain size will have an effect on the rate of nucleation, but it is generally temperature and the rate of cooling that controls the grain size and microstructure. When austenite is cooled extremely slowly, it will form large ferrite crystals filled with spherical inclusions of cementite. This microstructure is referred to as ""sphereoidite"". If cooled a little faster, then coarse pearlite will form. Even faster, and fine pearlite will form. If cooled even faster, bainite will form. Similarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time.Most non-ferrous alloys are also heated in order to form a solution. Most often, these are then cooled very quickly to produce a martensite transformation, putting the solution into a supersaturated state. The alloy, being in a much softer state, may then be cold worked. This causes work hardening that increases the strength and hardness of the alloy. Moreover, the defects caused by plastic deformation tend to speed up precipitation, increasing the hardness beyond what is normal for the alloy. Even if not cold worked, the solutes in these alloys will usually precipitate, although the process may take much longer. 
Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed up the precipitation.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too largeHowever, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperatureThis causes work hardening that increases the strength and hardness of the alloyThe cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructuresShear strength can be measured by a torsion test where it is equal to their torsional strength.
Temperatures elevated above 300 °C (572 °F) degrade the mechanical properties of concrete, including compressive strength, fracture strength, tensile strength, and elastic modulus, with respect to deleterious effect on its structural changes.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too largeCooling a metal will usually suppress the precipitation to a much lower temperatureSimilarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time.Most non-ferrous alloys are also heated in order to form a solutionThe alloy, being in a much softer state, may then be cold workedThese grow larger as the temper","Sometimes these metals are then heated to a temperature that is below the lower critical (A1) temperature, preventing recrystallization, in order to speed-up the precipitationFor instance, when steel is heated above the upper critical-temperature, small grains of austenite formAustenite, for example, usually only exists above the upper critical temperatureShear strength can be measured by a torsion test where it is equal to their torsional strength.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too largeHowever, if the austenite is cooled quickly enough, the transformation may be suppressed for hundreds of degrees below the lower critical temperatureThis causes work hardening that increases the strength and hardness of the alloyThe cooling rate can be used to control the rate of grain growth or can even be used to produce partially martensitic microstructuresShear strength can be measured by a torsion test where it is equal to their torsional strength.
Temperatures elevated above 300 °C (572 °F) degrade the mechanical properties of concrete, including compressive strength, fracture strength, tensile strength, and elastic modulus, with respect to deleterious effect on its structural changes.
Because a smaller grain size usually enhances mechanical properties, such as toughness, shear strength and tensile strength, these metals are often heated to a temperature that is just above the upper critical temperature, in order to prevent the grains of solution from growing too largeCooling a metal will usually suppress the precipitation to a much lower temperatureSimilarly, these microstructures will also form, if cooled to a specific temperature and then held there for a certain time.Most non-ferrous alloys are also heated in order to form a solutionThe alloy, being in a much softer state, may then be cold workedThese grow larger as the temper[SEP]What is the reason for heating metals to a temperature just above the upper critical temperature?","['C', 'A', 'B']",1.0
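The rule of thumb quoted in this row, that forging temperature is roughly 70% of the melting point on the absolute (kelvin) scale, can be illustrated with a short calculation. The sketch below is only an illustration of that heuristic, not part of the source material: the helper name, the choice of metals, and treating the factor as exactly 0.70 are assumptions.

```python
# Minimal sketch of the heuristic quoted above: forging temperature is
# roughly 70% of the melting point expressed on the absolute (kelvin) scale.
# Melting points are taken from the table in the passage; the helper name
# and the exact 0.70 factor are assumptions, and real forging practice uses
# empirically determined temperature ranges instead.

MELTING_POINT_C = {  # degrees Celsius, from the table above
    "Nickel": 1453,
    "Titanium": 1660,
    "Copper": 1083,
    "Iron": 1535,
}

def rough_forging_temperature_c(melting_point_c: float, fraction: float = 0.70) -> float:
    """Apply the ~70%-of-absolute-melting-temperature heuristic."""
    melting_point_k = melting_point_c + 273.15
    return fraction * melting_point_k - 273.15

for metal, t_melt in MELTING_POINT_C.items():
    print(f"{metal:8s}: melts at {t_melt} °C, "
          f"heuristic forging temperature ≈ {rough_forging_temperature_c(t_melt):.0f} °C")
```

As the printed estimates show, the heuristic only lands in the right neighbourhood; where a tabulated, empirically determined forging temperature is available (as in the table above), it should be preferred.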
What is the cause of the observed change in the periods of moons orbiting a distant planet when measured from Earth?,"Since the speed of the Earth varies according to its position in its orbit as measured from its perihelion, Earth's speed when in a solstice or equinox point changes over time: if such a point moves toward perihelion, the interval between two passages decreases a little from year to year; if the point moves towards aphelion, that period increases a little from year to year. * Tidal drag between the Earth and the Moon and Sun increases the length of the day and of the month (by transferring angular momentum from the rotation of the Earth to the revolution of the Moon); since the apparent mean solar day is the unit with which we measure the length of the year in civil life, the length of the year appears to decrease. This period is associated with the apparent size of the full moon, and also with the varying duration of the synodic month. * The positions of the equinox and solstice points with respect to the apsides of Earth's orbit change: the equinoxes and solstices move westward relative to the stars because of precession, and the apsides move in the other direction because of the long-term effects of gravitational pull by the other planets. The length of day of other planets also varies, particularly of the planet Venus, which has such a dynamic and strong atmosphere that its length of day fluctuates by up to 20 minutes. ==Observations== thumb|Deviation of day length from SI based day Any change of the axial component of the atmospheric angular momentum (AAM) must be accompanied by a corresponding change of the angular momentum of Earth's crust and mantle (due to the law of conservation of angular momentum). A year is the orbital period of a planetary body, for example, the Earth, moving in its orbit around the Sun. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by change in weather, the hours of daylight, and, consequently, vegetation and soil fertility. The satellite revisit period is the time elapsed between observations of the same point on Earth by a satellite. The orbit of the Earth is elliptical; the extreme points, called apsides, are the perihelion, where the Earth is closest to the Sun, and the aphelion, where the Earth is farthest from the Sun. Moreover, it causes long-term changes in its orbit, and therefore also long-term changes in these periods. Such a planet would be slightly closer to the Sun than Earth's mean distance. The lunar geological timescale (or selenological timescale) divides the history of Earth's Moon into five generally recognized periods: the Copernican, Eratosthenian, Imbrian (Late and Early epochs), Nectarian, and Pre-Nectarian. The length of the day (LOD), which has increased over the long term of Earth's history due to tidal effects, is also subject to fluctuations on a shorter scale of time. Furthermore, as the oldest geological periods of the Moon are based exclusively on the times of individual impact events (in particular, Nectaris, Imbrium, and Orientale), these punctual events will most likely not correspond to any specific geological event on the other terrestrial planets, such as Mercury, Venus, Earth, or Mars. The average over the full orbit does not change because of this, so the length of the average tropical year does not change because of this second-order effect. Its average duration is 365.259636 days (365 d 6 h 13 min 52.6 s) (at the epoch J2011.0). 
=== Draconic year === The draconic year, draconitic year, eclipse year, or ecliptic year is the time taken for the Sun (as seen from the Earth) to complete one revolution with respect to the same lunar node (a point where the Moon's orbit intersects the ecliptic). Similarly, year can mean the orbital period of any planet; for example, a Martian year and a Venusian year refer to the time those planets take to transit one complete orbit. The younger boundary of this period is defined based on the recognition that freshly excavated materials on the lunar surface are generally bright and that they become darker over time as a result of space weathering processes. This term is sometimes erroneously used for the draconic or nodal period of lunar precession, that is the period of a complete revolution of the Moon's ascending node around the ecliptic: Julian years ( days; at the epoch J2000.0). === Full moon cycle === The full moon cycle is the time for the Sun (as seen from the Earth) to complete one revolution with respect to the perigee of the Moon's orbit. Its Earth equivalent consists of most of the Mesoarchean and Neoarchean eras (Archean eon), Paleoproterozoic and Mesoproterozoic eras (Proterozoic eon). ==Examples== Other than Eratosthenes itself, examples of large Eratosthenian craters on the near side of the moon include Langrenus, Macrobius, Aristoteles, Hausen, Moretus, Pythagoras, Scoresby, Bullialdus, Plutarch, and Cavalerius. The boundaries of this time scale are related to large impact events that have modified the lunar surface, changes in crater formation through time, and the size-frequency distribution of craters superposed on geological units. ","The difference in the size of the planet's moons when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun.","The difference in the speed of light when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun.","The difference in distance travelled by light from the planet (or its moon) to Earth when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun.","The difference in the atmospheric conditions of the planet when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun.","The difference in the gravitational pull of the planet on its moons when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun.",C,kaggle200,"Earth's co-orbital asteroids population consists of quasi-satellites, objects with a horseshoe orbit and trojans. There are at least five quasi-satellites, including 469219 Kamoʻoalewa. A trojan asteroid companion, , is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid makes close approaches to the Earth–Moon system roughly every twenty years. 
During these approaches, it can orbit Earth for brief periods of time.
In astronomy, a planet's elongation is the angular separation between the Sun and the planet, with Earth as the reference point. The greatest elongation of a given inferior planet occurs when this planet's position, in its orbital path around the Sun, is at tangent to the observer on Earth. Since an inferior planet is well within the area of Earth's orbit around the Sun, observation of its elongation should not pose that much of a challenge (compared to deep-sky objects, for example). When a planet is at its greatest elongation, it appears farthest from the Sun as viewed from Earth, so its apparition is also best at that point.
In orbital mechanics, a libration point orbit (LPO) is a quasiperiodic orbit around a Lagrange point. Libration is a form of orbital motion exhibited, for example, in the Earth–Moon system. Trojan bodies also exhibit libration dynamics.
Ole Christensen Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.","Other small natural objects in orbit around the Sun may enter orbit around Earth for a short amount of time, becoming temporary natural satellites. As of 2020, the only confirmed examples have been 2006 RH120 in Earth orbit during 2006 and 2007, and 2020 CD3 in Earth orbit between 2018 and 2020.
In orbital mechanics, a libration point orbit (LPO) is a quasiperiodic orbit around a Lagrange point. Libration is a form of orbital motion exhibited, for example, in the Earth–Moon system. Trojan bodies also exhibit libration dynamics.
Ole Christensen Rømer used an astronomical measurement to make the first quantitative estimate of the speed of light in the year 1676. When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from it. The distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the Sun. The observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distance. Rømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.","When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from itThe observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distanceThe distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the SunRømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbitLibration is a form of orbital motion exhibited, for example, in the Earth–Moon systemRømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.The greatest elongation of a given inferior planet occurs when this planet's position, in its orbital path around the Sun, is at tangent to the observer on EarthDuring these approaches, it can orbit Earth for brief periods of time.
In astronomy, a planet's elongation is the angular separation between the Sun and the planet, with Earth as the reference pointThe tiny near-Earth asteroid makes close approaches to the Earth–Moon system roughly every twenty yearsWhen a planet is at its greatest elongation, it appears farthest from the Sun as viewed from Earth, so its apparition is also best at that point.
In orbital mechanics, a libration point orbit (LPO) is a quasiperiodic orbit around a Lagrange pointA trojan asteroid companion, , is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the SunSince an inferior planet is well within the area of Earth's orbit around the Sun, observation of its elongation should not pose that much a challenge (compared to deep-sky objects, for example)- Earth's co-orbital asteroids population consists of quasi-satellites, objects with a","When measured from Earth, the periods of moons orbiting a distant planet are shorter when the Earth is approaching the planet than when the Earth is receding from itThe observed change in the moon's orbital period is caused by the difference in the time it takes light to traverse the shorter or longer distanceThe distance travelled by light from the planet (or its moon) to Earth is shorter when the Earth is at the point in its orbit that is closest to its planet than when the Earth is at the farthest point in its orbit, the difference in distance being the diameter of the Earth's orbit around the SunRømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbitLibration is a form of orbital motion exhibited, for example, in the Earth–Moon systemRømer observed this effect for Jupiter's innermost major moon Io and deduced that light takes 22 minutes to cross the diameter of the Earth's orbit.The greatest elongation of a given inferior planet occurs when this planet's position, in its orbital path around the Sun, is at tangent to the observer on EarthDuring these approaches, it can orbit Earth for brief periods of time.
In astronomy, a planet's elongation is the angular separation between the Sun and the planet, with Earth as the reference pointThe tiny near-Earth asteroid makes close approaches to the Earth–Moon system roughly every twenty yearsWhen a planet is at its greatest elongation, it appears farthest from the Sun as viewed from Earth, so its apparition is also best at that point.
In orbital mechanics, a libration point orbit (LPO) is a quasiperiodic orbit around a Lagrange pointA trojan asteroid companion, , is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the SunSince an inferior planet is well within the area of Earth's orbit around the Sun, observation of its elongation should not pose that much a challenge (compared to deep-sky objects, for example)- Earth's co-orbital asteroids population consists of quasi-satellites, objects with a[SEP]What is the cause of the observed change in the periods of moons orbiting a distant planet when measured from Earth?","['C', 'E', 'D']",1.0
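The 22-minute figure attributed to Rømer in this row implies a numerical estimate of the speed of light once a value for the diameter of Earth's orbit is assumed. The minimal sketch below uses the modern astronomical unit, which Rømer did not have; the variable names are our own.

```python
# Sketch of the speed-of-light estimate implied by Romer's observation that
# light takes about 22 minutes to cross the diameter of Earth's orbit.
# The modern value of the astronomical unit is used here for illustration;
# Romer himself worked with a less accurate Earth-Sun distance.

AU_M = 1.496e11            # astronomical unit in metres (modern value)
CROSSING_TIME_S = 22 * 60  # Romer's 22 minutes, in seconds

orbit_diameter_m = 2 * AU_M
c_estimate = orbit_diameter_m / CROSSING_TIME_S

print(f"Implied speed of light ≈ {c_estimate:.3e} m/s")  # ≈ 2.27e8 m/s
print("Modern value            ≈ 2.998e8 m/s")
```

The implied value, about 2.3 × 10^8 m/s, is roughly a quarter below the modern figure, mainly because light actually crosses the orbital diameter in closer to 17 minutes than 22.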
What is the origin of the radio emission observed from supernova remnants?,"Remnants which could only be created by significantly higher ejection energies than a standard supernova are called hypernova remnants, after the high-energy hypernova explosion that is assumed to have created them. ==Origin of cosmic rays== Supernova remnants are considered the major source of galactic cosmic rays. A supernova remnant (SNR) is the structure resulting from the explosion of a star in a supernova. One of the best observed young supernova remnants was formed by SN 1987A, a supernova in the Large Magellanic Cloud that was observed in February 1987. In the late 1990s it was proposed that recent supernova remnants could be found by looking for gamma rays from the decay of titanium-44. It was the first supernova to be detected through its neutrino emission and the first to be observed across every band of the electromagnetic spectrum. Later called SN 1572, this supernova was associated with a remnant during the 1960s. They identified S Andromedae, what they considered a typical supernova, as an explosive event that released radiation approximately equal to the Sun's total energy output for 10^7 years. The supernova remnant is bounded by an expanding shock wave, and consists of ejected material expanding from the explosion, and the interstellar material it sweeps up and shocks along the way. The most likely explanations involve the efficient conversion of explosive kinetic energy to radiation by interaction with circumstellar material, similar to a type IIn supernova but on a larger scale. The Green Catalogue of Supernova Remnants lists supernova remnants (SNR) within the Milky Way Galaxy. This is a list of observed supernova remnants (SNRs) in the Milky Way, as well as galaxies nearby enough to resolve individual nebulae, such as the Large and Small Magellanic Clouds and the Andromeda Galaxy. Supernova remnants typically only survive for a few tens of thousands of years, making all known SNRs fairly young compared to many other astronomical objects. These observations are consistent with the appearance of a supernova, and this is believed to be the oldest confirmed record of a supernova event by humankind. The remnant of this supernova was identified in 1941 at the Mount Wilson Observatory. ==Telescope observation== The true nature of the supernova remained obscure for some time. Supernova remnants can provide the energetic shock fronts required to generate ultra-high energy cosmic rays. When the supernova remnant slows to the speed of the random velocities in the surrounding medium, after roughly 30,000 years, it will merge into the general turbulent flow, contributing its remaining kinetic energy to the turbulence. ==Types of supernova remnant== There are three types of supernova remnant: * Shell-like, such as Cassiopeia A * Composite, in which a shell contains a central pulsar wind nebula, such as G11.2-0.3 or G21.5-0.9. Successful models of supernova behavior have also been developed, and the role of supernovae in the star formation process is now increasingly understood. 
==Early history== Year Observed location Maximum brightness Certainty of suggestion 185 Centaurus −6 Suggested SN, also suggested comet 386 Sagittarius +1,5 Uncertain, suggested SN, possible nova or supernova 393 Scorpius −3 Possible SN, possible nova 1006 Lupus −7,5±0,4 Certain: known SNR 1054 Taurus −6 Certain: known SNR and pulsar 1181 Cassiopeia −2 likely not SN (suggested, rejected), but activity of WR-star 1572 Cassiopeia −4 Certain: known SNR 1604 Ophiuchus −2 Certain: known SNR The earliest possible recorded supernova, known as HB9, could have been viewed and recorded by unknown Indian observers in . *Chandra Galactic SNR gallery *SNRcat, the online high-energy catalogue of supernova remnants Supernova Remnants * Supernova Remnants, list of Category:Articles containing video clips The relative proximity of this supernova has allowed detailed observation, and it provided the first opportunity for modern theories of supernova formation to be tested against observations. Other well-known supernova remnants include the Crab Nebula; Tycho, the remnant of SN 1572, named after Tycho Brahe who recorded the brightness of its original explosion; and Kepler, the remnant of SN 1604, named after Johannes Kepler. ",The radio emission from supernova remnants originates from the rebound of gas falling inward during the supernova explosion. This emission is a form of non-thermal emission called synchrotron emission.,The radio emission from supernova remnants originates from high-velocity electrons oscillating within magnetic fields. This emission is a form of non-thermal emission called synchrotron emission.,The radio emission from supernova remnants originates from the fusion of hydrogen and helium in the core of the star. This emission is a form of non-thermal emission called synchrotron emission.,The radio emission from supernova remnants originates from the expansion of the shell of gas during the supernova explosion. This emission is a form of thermal emission called synchrotron emission.,The radio emission from supernova remnants originates from the ionized gas present in the remnants. This emission is a form of thermal emission called synchrotron emission.,B,kaggle200,"Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron stars. These objects have very high rotation rates, which leads to very intense magnetic fields that are capable of accelerating large amounts of particles to highly-relativistic speeds. Of particular interest is the fact that there is no consensus yet on the coherent radio emission mechanism responsible for pulsars, which cannot be explained by the two well-established coherent mechanisms discussed here, plasma emission and electron cyclotron maser emission. Proposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emission. All of these processes still involve the transfer of energy from moving electrons into radiation. However, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation.
Supernova remnants often show diffuse radio emission. Examples include Cassiopeia A, the brightest extrasolar radio source in the sky, and the Crab Nebula.
Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emission. But of course, other stars also produce radio emission and may produce much more intense radiation in absolute terms than is observed from the Sun. For ""normal"" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission. However, emission from ""radio stars"" may exhibit significantly different properties compared to the Sun, and the relative importance of the different mechanisms may change depending on the properties of the star, particularly with respect to size and rotation rate, the latter of which largely determines the strength of a star's magnetic field. Notable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars, radio emission from massive stellar winds, and radio emission associated with close binary stars. Pre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.
A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. The expanding shell of gas forms a supernova remnant, a special diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields.","Gyroresonance and gyrosynchrotron are most-important in the solar context, although there may be special cases in which synchrotron emission also operates. For any sub-type, gyromagnetic emission occurs near the electron gyrofrequency ( fB ) from Equation 2 or one of its harmonics. This mechanism dominates when the magnetic field strengths are large such that fB > fp . This is mainly true in the chromosphere, where gyroresonance emission is the primary source of quiescent (non-burst) radio emission, producing microwave radiation in the GHz range. Gyroresonance emission can also be observed from the densest structures in the corona, where it can be used to measure the coronal magnetic field strength. Gyrosynchrotron emission is responsible for certain types of microwave radio bursts from the chromosphere and is also likely responsible for certain types of coronal radio bursts.
Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emission. But of course, other stars also produce radio emission and may produce much more intense radiation in absolute terms than is observed from the Sun. For ""normal"" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission. However, emission from ""radio stars"" may exhibit significantly different properties compared to the Sun, and the relative importance of the different mechanisms may change depending on the properties of the star, particularly with respect to size and rotation rate, the latter of which largely determines the strength of a star's magnetic field. Notable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars, radio emission from massive stellar winds, and radio emission associated with close binary stars. Pre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron stars. These objects have very high rotation rates, which leads to very intense magnetic fields that are capable of accelerating large amounts of particles to highly-relativistic speeds. Of particular interest is the fact that there is no consensus yet on the coherent radio emission mechanism responsible for pulsars, which cannot be explained by the two well-established coherent mechanisms discussed here, plasma emission and electron cyclotron maser emission. Proposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emission. All of these processes still involve the transfer of energy from moving electrons into radiation. However, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation.
Supernova remnants A supernova occurs when a high-mass star reaches the end of its life. When nuclear fusion in the core of the star stops, the star collapses. The gas falling inward either rebounds or gets so strongly heated that it expands outwards from the core, thus causing the star to explode. The expanding shell of gas forms a supernova remnant, a special diffuse nebula. Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emission. This emission originates from high-velocity electrons oscillating within magnetic fields.","Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emissionHowever, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation.
Supernova remnants often show diffuse radio emissionFor ""normal"" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission- Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron starsNotable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars, radio emission from massive stellar winds, and radio emission associated with close binary starsPre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.
A supernova occurs when a high-mass star reaches the end of its lifePre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron starsGyrosynchrotron emission is responsible for certain types of microwave radio bursts from the chromosphere and is also likely responsible for certain types of coronal radio bursts.
Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emissionProposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emissionThe expanding shell of gas forms a supernova remnant, a special diffuse neb","Although much of the optical and X-ray emission from supernova remnants originates from ionized gas, a great amount of the radio emission is a form of non-thermal emission called synchrotron emissionHowever, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation.
Supernova remnants often show diffuse radio emissionFor ""normal"" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission- Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron starsNotable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars, radio emission from massive stellar winds, and radio emission associated with close binary starsPre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.
A supernova occurs when a high-mass star reaches the end of its lifePre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission.Different radio emission processes also exist for certain pre-main-sequence stars, along with post-main sequence stars such as neutron starsGyrosynchrotron emission is responsible for certain types of microwave radio bursts from the chromosphere and is also likely responsible for certain types of coronal radio bursts.
Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emissionProposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emissionThe expanding shell of gas forms a supernova remnant, a special diffuse neb[SEP]What is the origin of the radio emission observed from supernova remnants?","['E', 'B', 'D']",0.5
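The solar-radio passage in this row states that gyroresonance emission occurs near the electron gyrofrequency f_B (its "Equation 2" is not reproduced in the excerpt) and dominates where f_B exceeds the plasma frequency f_p, producing GHz microwaves. The sketch below assumes the standard non-relativistic expressions for f_B and f_p and plugs in illustrative values for the magnetic field and electron density; neither the appearance of these formulas here nor the numbers are taken from the text.

```python
import math

# Sketch of the f_B > f_p comparison mentioned in the passage. "Equation 2"
# is not reproduced in the excerpt, so the standard non-relativistic
# expressions for the electron gyrofrequency and plasma frequency are
# assumed here; the field strength and electron density are illustrative
# values, not measurements quoted in the text.

E_CHARGE = 1.602e-19   # electron charge, C
M_E      = 9.109e-31   # electron mass, kg
EPS0     = 8.854e-12   # vacuum permittivity, F/m

def electron_gyrofrequency_hz(b_tesla: float) -> float:
    """f_B = e B / (2 pi m_e)."""
    return E_CHARGE * b_tesla / (2 * math.pi * M_E)

def electron_plasma_frequency_hz(n_e_per_m3: float) -> float:
    """f_p = sqrt(n_e e^2 / (eps0 m_e)) / (2 pi)."""
    return math.sqrt(n_e_per_m3 * E_CHARGE**2 / (EPS0 * M_E)) / (2 * math.pi)

b_field = 0.1      # tesla (~1000 G, a strong sunspot-like field; assumed)
density = 1.0e16   # electrons per m^3 (illustrative value; assumed)

f_b = electron_gyrofrequency_hz(b_field)      # ~2.8 GHz
f_p = electron_plasma_frequency_hz(density)   # ~0.9 GHz
print(f"f_B ≈ {f_b:.2e} Hz, f_p ≈ {f_p:.2e} Hz, gyro-dominated: {f_b > f_p}")
```

With these assumed values the comparison comes out gyro-dominated at a few GHz, consistent with the microwave range the passage mentions for chromospheric gyroresonance emission.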
What is the relationship between the Hamiltonians and eigenstates in supersymmetric quantum mechanics?,"An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. Thus, if a Hamiltonian matrix has as an eigenvalue, then , and are also eigenvalues. Furthermore, the sum (and any linear combination) of two Hamiltonian matrices is again Hamiltonian, as is their commutator. In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions. Then the condition that be Hamiltonian is equivalent to requiring that the matrices and are symmetric, and that .. (The potential energy terms which occur in the Hamiltonians are then called partner potentials.) In theoretical physics, supersymmetric quantum mechanics is an area of research where supersymmetry are applied to the simpler setting of plain quantum mechanics, rather than quantum field theory. Given a superpotential, two ""partner potentials"" are derived that can each serve as a potential in the Schrödinger equation. In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. Casting all of these into the Hamiltonian gives \hat{H} = \frac{1}{2m} \left ( -i\hbar abla - q\mathbf{A} \right)^2 + q\phi . ==Energy eigenket degeneracy, symmetry, and conservation laws== In many systems, two or more energy eigenstates have the same energy. In mathematics, a Hamiltonian matrix is a -by- matrix such that is symmetric, where is the skew-symmetric matrix :J = \begin{bmatrix} 0_n & I_n \\\ -I_n & 0_n \\\ \end{bmatrix} and is the -by- identity matrix. We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory. Then we can simplify the expression for the Hamiltonian to :H = \frac{(p)^2}{2}+\frac{{W}^2}{2}+\frac{W'}{2}(bb^\dagger-b^\dagger b) There are certain classes of superpotentials such that both the bosonic and fermionic Hamiltonians have similar forms. The partner potentials have the same spectrum, apart from a possible eigenvalue of zero, meaning that the physical systems represented by the two potentials have the same characteristic energies, apart from a possible zero-energy ground state. ==One-dimensional example== Consider a one-dimensional, non- relativistic particle with a two state internal degree of freedom called ""spin"". It follows easily from the definition that the transpose of a Hamiltonian matrix is Hamiltonian. The Hamiltonian takes different forms and can be simplified in some cases by taking into account the concrete characteristics of the system under analysis, such as single or several particles in the system, interaction between particles, kind of potential energy, time varying potential or time independent one. 
==Schrödinger Hamiltonian== ===One particle=== By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form \hat{H} = \hat{T} + \hat{V}, where \hat{V} = V = V(\mathbf{r},t) , is the potential energy operator and \hat{T} = \frac{\mathbf{\hat{p}}\cdot\mathbf{\hat{p}}}{2m} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m} abla^2, is the kinetic energy operator in which m is the mass of the particle, the dot denotes the dot product of vectors, and \hat{p} = -i\hbar abla , is the momentum operator where a abla is the del operator. Let's say we have a quantum system described by a Hamiltonian \mathcal{H} and a set of N operators Q_i. We can continue this process of finding partner potentials with the shape invariance condition, giving the following formula for the energy levels in terms of the parameters of the potential : E_n=\sum\limits_{i=1}^n R(a_i) where a_i are the parameters for the multiple partnered potentials. ==Applications== In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in quantum finance, and to financial networks. ==See also== *Supersymmetry algebra *Superalgebra *Supersymmetric gauge theory ==References== ==Sources== * F. Cooper, A. Khare and U. Sukhatme, ""Supersymmetry and Quantum Mechanics"", Phys.Rept.251:267-385, 1995. ","For every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy.","For every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with a higher energy.","For every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with a different spin.","For every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with a different energy.","For every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with a lower energy.",A,kaggle200,"In a quantum system at the ""n""-th eigenstate, an adiabatic evolution of the Hamiltonian sees the system remain in the ""n""-th eigenstate of the Hamiltonian, while also obtaining a phase factor. The phase obtained has a contribution from the state's time evolution and another from the variation of the eigenstate with the changing Hamiltonian. The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
If the system is initially in an eigenstate of the initial Hamiltonian, then after the evolution period it will have passed into the ""corresponding"" eigenstate of the final Hamiltonian.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called ""partner Hamiltonians"". (The potential energy terms which occur in the Hamiltonians are then called ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called ""partner Hamiltonians"". (The potential energy terms which occur in the Hamiltonians are then known as ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.","In a quantum system at the n-th eigenstate, an adiabatic evolution of the Hamiltonian sees the system remain in the n-th eigenstate of the Hamiltonian, while also obtaining a phase factor. The phase obtained has a contribution from the state's time evolution and another from the variation of the eigenstate with the changing Hamiltonian. The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then called partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates). This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy—but, in the relativistic world, energy and mass are interchangeable, so we can just as easily say that the partner particles have equal mass.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.","The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions(The potential energy terms which occur in the Hamiltonians are then called ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates)(The potential energy terms which occur in the Hamiltonians are then called partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates)The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
If the system is initially in an eigenstate of formula_2, after a period formula_14 it will have passed into the ""corresponding"" eigenstate of formula_5.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called ""partner Hamiltonians""We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory(The potential energy terms which occur in the Hamiltonians are then known as ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energyThe second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians(The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows ","The SUSY partner of this Hamiltonian would be ""fermionic"", and its eigenstates would be the theory's fermions(The potential energy terms which occur in the Hamiltonians are then called ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates)(The potential energy terms which occur in the Hamiltonians are then called partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy (except possibly for zero energy eigenstates)The second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
If the system is initially in an eigenstate of formula_2, after a period formula_14 it will have passed into the ""corresponding"" eigenstate of formula_5.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called ""partner Hamiltonians""We can imagine a ""bosonic Hamiltonian"", whose eigenstates are the various bosons of our theory(The potential energy terms which occur in the Hamiltonians are then known as ""partner potentials"".) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energyThe second term corresponds to the Berry phase, and for non-cyclical variations of the Hamiltonian it can be made to vanish by a different choice of the phase associated with the eigenstates of the Hamiltonian at each point in the evolution.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians(The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows [SEP]What is the relationship between the Hamiltonians and eigenstates in supersymmetric quantum mechanics?","['E', 'D', 'A']",0.3333333333333333
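The theorem stated in this row, that every eigenstate of one partner Hamiltonian has a counterpart of the same energy in the other (except possibly a zero-energy state), can be checked numerically. The sketch below is a minimal illustration under our own assumptions rather than the source's construction: units with ħ = 2m = 1, partner potentials V± = W² ∓ W′, the superpotential W(x) = x, and a crude finite-difference discretisation.

```python
import numpy as np

# Numerical check of the SUSY QM degeneracy theorem described above.
# Assumptions (not taken from the text): units hbar = 2m = 1, partner
# potentials V_± = W^2 ∓ W', superpotential W(x) = x, and a second-order
# finite-difference discretisation on a finite box.

N, L = 1000, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

W = x                       # superpotential W(x) = x
W_prime = np.ones_like(x)   # W'(x) = 1

V1 = W**2 - W_prime         # partner potential of H1 ("bosonic" sector)
V2 = W**2 + W_prime         # partner potential of H2 ("fermionic" sector)

def hamiltonian(V):
    """H = -d^2/dx^2 + V(x) with a tridiagonal finite-difference Laplacian."""
    diag = 2.0 / dx**2 + V
    off = -np.ones(N - 1) / dx**2
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E1 = np.linalg.eigvalsh(hamiltonian(V1))[:5]
E2 = np.linalg.eigvalsh(hamiltonian(V2))[:5]
print("lowest eigenvalues of H1:", np.round(E1, 3))  # ~ 0, 2, 4, 6, 8
print("lowest eigenvalues of H2:", np.round(E2, 3))  # ~ 2, 4, 6, 8, 10
```

Every nonzero level of H1 reappears in H2 to within discretisation error, while the zero-energy ground state of H1 has no partner, which is exactly the pattern the theorem allows.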
What is the proposed name for the field that is responsible for cosmic inflation and the metric expansion of space?,"Inflationary cosmology. Most inflationary models propose a scalar field called the inflaton field, with properties necessary for having (at least) two vacuum states. ""Inflationary cosmology."" In physical cosmology, warm inflation is one of two dynamical realizations of cosmological inflation. Starobinsky inflation is a modification of general relativity used to explain cosmological inflation. ==History== In the Soviet Union, Alexei Starobinsky noted that quantum corrections to general relativity should be important for the early universe. __NOTOC__ In physical cosmology, the inflationary epoch was the period in the evolution of the early universe when, according to inflation theory, the universe underwent an extremely rapid exponential expansion. When the inflaton field reconfigured itself into the low-energy vacuum state we currently observe, the huge difference of potential energy was released in the form of a dense, hot mixture of quarks, anti-quarks and gluons as it entered the electroweak epoch. == Detection via polarization of cosmic microwave background radiation == One approach to confirming the inflationary epoch is to directly measure its effect on the cosmic microwave background (CMB) radiation. Category:Inflation (cosmology) Category:Physical cosmology Category:Multiverse Category:Physical cosmology Category:Big Bang Category:Inflation (cosmology) To distinguish, models using the original, more complete, quantum effective action are then called (trace)-anomaly induced inflation. ==Observables== Starobinsky inflation gives a prediction for the observables of the spectral tilt n_s and the tensor- scalar ratio r: n_s = 1 - \frac{2}{N}, \quad r = \frac{12}{N^2}, where N is the number of e-foldings since the horizon crossing. Eternal inflation (a multiple universe model) Andreï Linde, 1983 Big Bang with cosmic inflation Multiverse based on the concept of cold inflation, in which inflationary events occur at random each with independent initial conditions; some expand into bubble universes supposedly like our entire cosmos. When a mini- universe inflates and ""self-reproduces"" into, say, twenty causally- disconnected mini-universes of equal size to the original mini-universe, perhaps nine of the new mini-universes will have a larger, rather than smaller, average inflaton field value than the original mini-universe, because they inflated from regions of the original mini-universe where quantum fluctuation pushed the inflaton value up more than the slow inflation decay rate brought the inflaton value down. Quantum fluctuations in the hypothetical inflation field produce changes in the rate of expansion that are responsible for eternal inflation. That analysis concluded to a high degree of certainty that the original BICEP signal can be entirely attributed to dust in the Milky Way and therefore does not provide evidence one way or the other to support the theory of the inflationary epoch. ==See also== * * * ==Notes== ==References== * * ==External links== * Inflation for Beginners by John Gribbin * NASA Universe 101 What is the Inflation Theory? Cosmology () is a branch of physics and metaphysics dealing with the nature of the universe. These observations matched the predictions of the cosmic inflation theory, a modified Big Bang theory, and the specific version known as the Lambda-CDM model. 
However, it was soon realized that the inflation was essentially controlled by the contribution from a squared Ricci scalar in the effective action : S = \frac{1}{2\kappa} \int \left(R + \frac{R^2}{6M^2} \right) \sqrt{\vert g\vert}\,\mathrm{d}^4x, where \kappa=8\pi G/c^4 and R is the Ricci scalar. Eternal inflation is a hypothetical inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory. This expansion explains various properties of the current universe that are difficult to account for without such an inflationary epoch. Their paper therefore concluded that the theory of eternal inflation based on random quantum fluctuations would not be a viable theory, and the resulting existence of a multiverse is ""still very much an open question that will require much deeper investigation"". ==Inflation, eternal inflation, and the multiverse== In 1983, it was shown that inflation could be eternal, leading to a multiverse in which space is broken up into bubbles or patches whose properties differ from patch to patch spanning all physical possibilities. ",Inflaton,Quanta,Scalar,Metric,Conformal cyclic cosmology,A,kaggle200,"It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the metric expansion of space.
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. The metric Misner studied is also known as the Bianchi type IX metric.
No physical field has yet been discovered that is responsible for this inflation. However such a field would be scalar and the first relativistic scalar field proven to exist, the Higgs field, was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today.
No field responsible for cosmic inflation has been discovered. However such a field, if found in the future, would be scalar. The first similar scalar field proven to exist was only discovered in 2012–2013 and is still being researched. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered.","Cosmic inflation Cosmic inflation is a faster-than-light expansion of space just after the Big Bang. It explains the origin of the large-scale structure of the cosmos. It is believed quantum vacuum fluctuations caused by zero-point energy arising in the microscopic inflationary period, later became magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). Many physicists also believe that inflation explains why the Universe appears to be the same in all directions (isotropic), why the cosmic microwave background radiation is distributed evenly, why the Universe is flat, and why no magnetic monopoles have been observed.
In this stage, some mechanism, such as cosmic inflation, was responsible for establishing the initial conditions of the universe: homogeneity, isotropy, and flatness. Cosmic inflation also would have amplified minute quantum fluctuations (pre-inflation) into slight density ripples of overdensity and underdensity (post-inflation).
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. The metric Misner studied is also known as the Bianchi type IX metric.","Cosmic inflation Cosmic inflation is a faster-than-light expansion of space just after the Big Bang. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today.
No field responsible for cosmic inflation has been discovered. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The metric Misner studied is also known as the Bianchi type IX metric.
No physical field has yet been discovered that is responsible for this inflation. It is believed quantum vacuum fluctuations caused by zero-point energy arising in the microscopic inflationary period later became magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the metric expansion of space.
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. Cosmic inflation also would have amplified minute quantum fluctuations (pre-inflation) into slight density ripples of overdensity and underdensity (post-inflation).
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. It explains the origin of the large-scale structure of the cosmos. Many physicists also believe that inflation explains why the Universe appears to be the same in all dir","Cosmic inflation Cosmic inflation is a faster-than-light expansion of space just after the Big Bang. If this field did not exist, scientists would have to propose a different explanation for all the observations that strongly suggest a metric expansion of space has occurred, and is still occurring (much more slowly) today.
No field responsible for cosmic inflation has been discovered. So it is not seen as problematic that a field responsible for cosmic inflation and the metric expansion of space has not yet been discovered. The metric Misner studied is also known as the Bianchi type IX metric.
No physical field has yet been discovered that is responsible for this inflation. It is believed quantum vacuum fluctuations caused by zero-point energy arising in the microscopic inflationary period later became magnified to a cosmic size, becoming the gravitational seeds for galaxies and structure in the Universe (see galaxy formation and evolution and structure formation). The proposed field and its quanta (the subatomic particles related to it) have been named the inflaton. It is now known that quasars are distant but extremely luminous objects, so any light that reaches the Earth is redshifted due to the metric expansion of space.
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. Cosmic inflation also would have amplified minute quantum fluctuations (pre-inflation) into slight density ripples of overdensity and underdensity (post-inflation).
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. It explains the origin of the large-scale structure of the cosmos. Many physicists also believe that inflation explains why the Universe appears to be the same in all dir[SEP]What is the proposed name for the field that is responsible for cosmic inflation and the metric expansion of space?","['A', 'E', 'C']",1.0
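The context above quotes the Starobinsky-model predictions n_s = 1 - 2/N and r = 12/N^2 for the spectral tilt and tensor-to-scalar ratio. A minimal Python sketch evaluating those two formulas follows; the e-fold counts used are assumed illustrative values, not taken from the dataset.
# Minimal sketch: Starobinsky-inflation observables from the formulas quoted in the context.
# The e-fold counts N below are assumed illustrative values.
def starobinsky_observables(n_efolds):
    n_s = 1.0 - 2.0 / n_efolds        # scalar spectral tilt
    r = 12.0 / n_efolds ** 2          # tensor-to-scalar ratio
    return n_s, r

for n_efolds in (50, 60):
    n_s, r = starobinsky_observables(n_efolds)
    print(f"N = {n_efolds}: n_s ~ {n_s:.3f}, r ~ {r:.4f}")
For N = 60 this gives n_s of roughly 0.967 and r of roughly 0.0033.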
Which of the following statements accurately describes the characteristics of gravitational waves?,"As with other waves, there are a number of characteristics used to describe a gravitational wave: * Amplitude: Usually denoted h, this is the size of the wave the fraction of stretching or squeezing in the animation. The speed, wavelength, and frequency of a gravitational wave are related by the equation , just like the equation for a light wave. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula. Stephen Hawking and Werner Israel list different frequency bands for gravitational waves that could plausibly be detected, ranging from 10−7 Hz up to 1011 Hz. ==Speed of gravity== The speed of gravitational waves in the general theory of relativity is equal to the speed of light in vacuum, . Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. Gravitational waves perform the same function. For gravitational waves with small amplitudes, this wave speed is equal to the speed of light (c). Gravitational waves have two important and unique properties. All of these must be taken into account and excluded by analysis before detection may be considered a true gravitational wave event. ====Einstein@Home==== The simplest gravitational waves are those with constant frequency. The oscillations depicted in the animation are exaggerated for the purpose of discussion in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). The sources of gravitational waves described above are in the low-frequency end of the gravitational-wave spectrum (10−7 to 105 Hz). In principle, gravitational waves could exist at any frequency. However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. The nonlinearity of surface gravity waves refers to their deviations from a sinusoidal shape. The gravitational wave background (also GWB and stochastic background) is a random gravitational-wave signal potentially detectable by gravitational wave detection experiments. Credit: NASA Goddard Space Flight Center In general terms, gravitational waves are radiated by objects whose motion involves acceleration and its change, provided that the motion is not perfectly spherically symmetric (like an expanding or contracting sphere) or rotationally symmetric (like a spinning disk or sphere). Where General Relativity is accepted, gravitational waves as detected are attributed to ripples in spacetime; otherwise the gravitational waves can be thought of simply as a product of the orbit of binary systems. In general relativity, a gravitational plane wave is a special class of a vacuum pp-wave spacetime, and may be defined in terms of Brinkmann coordinates by ds^2=[a(u)(x^2-y^2)+2b(u)xy]du^2+2dudv+dx^2+dy^2 Here, a(u), b(u) can be any smooth functions; they control the waveform of the two possible polarization modes of gravitational radiation. 
General relativity precisely describes these trajectories; in particular, the energy radiated in gravitational waves determines the rate of decrease in the period, defined as the time interval between successive periastrons (points of closest approach of the two stars). ","Gravitational waves have an amplitude denoted by h, which represents the size of the wave. The amplitude varies with time according to Newton's quadrupole formula. Gravitational waves also have a frequency denoted by f, which is the frequency of the wave's oscillation, and a wavelength denoted by λ, which is the distance between points of minimum stretch or squeeze.","Gravitational waves have an amplitude denoted by λ, which represents the distance between points of maximum stretch or squeeze. The amplitude varies with time according to Einstein's quadrupole formula. Gravitational waves also have a frequency denoted by h, which is the size of the wave, and a wavelength denoted by f, which is the frequency of the wave's oscillation.","Gravitational waves have an amplitude denoted by h, which represents the size of the wave. The amplitude varies with time according to Einstein's quadrupole formula. Gravitational waves also have a frequency denoted by f, which is the frequency of the wave's oscillation, and a wavelength denoted by λ, which is the distance between points of maximum stretch or squeeze.","Gravitational waves have an amplitude denoted by f, which represents the frequency of the wave's oscillation. The amplitude varies with time according to Einstein's quadrupole formula. Gravitational waves also have a frequency denoted by h, which is the size of the wave, and a wavelength denoted by λ, which is the distance between points of maximum stretch or squeeze.","Gravitational waves have an amplitude denoted by f, which represents the frequency of the wave's oscillation. The amplitude varies with time according to Newton's quadrupole formula. Gravitational waves also have a frequency denoted by h, which is the size of the wave, and a wavelength denoted by λ, which is the distance between points of minimum stretch or squeeze.",C,kaggle200,"Gravitational waves can be detected indirectly – by observing celestial phenomena caused by gravitational waves – or more directly by means of instruments such as the Earth-based LIGO or the planned space-based LISA instrument.
In the following pseudocode the degree of a polynomial formula_20 is denoted by formula_21 and the coefficient of formula_22 is denoted by formula_23.
The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula.
Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.","Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes) Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze.","The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes). Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze. Gravitational waves also travel through space. However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. Gravitational waves can be detected indirectly – by observing celestial phenomena caused by gravitational waves – or more directly by means of instruments such as the Earth-based LIGO or the planned space-based LISA instrument.
In the following pseudocode the degree of a polynomial formula_20 is denoted by formula_21 and the coefficient of formula_22 is denoted by formula_23.
The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Gravitational waves also travel through space. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula.
Gravitational waves also travel through space. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. The first observation of gravitational waves was announced on 11 February 2016.","The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Frequency: Usually denoted f, this is the frequency with which the wave oscillates (1 divided by the amount of time between two successive maximum stretches or squeezes). Wavelength: Usually denoted λ, this is the distance along the wave between points of maximum stretch or squeeze. Gravitational waves also travel through space. However, they help illustrate the kind of oscillations associated with gravitational waves as produced by a pair of masses in a circular orbit. Gravitational waves can be detected indirectly – by observing celestial phenomena caused by gravitational waves – or more directly by means of instruments such as the Earth-based LIGO or the planned space-based LISA instrument.
In the following pseudocode the degree of a polynomial formula_20 is denoted by formula_21 and the coefficient of formula_22 is denoted by formula_23.
The oscillations depicted in the animation are exaggerated for the purpose of discussion; in reality a gravitational wave has a very small amplitude (as formulated in linearized gravity). The first observation of gravitational waves was announced on 11 February 2016.
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Gravitational waves also travel through space. If the orbit of the masses is elliptical then the gravitational wave's amplitude also varies with time according to Einstein's quadrupole formula.
Gravitational waves also travel through space. In this case the amplitude of the gravitational wave is constant, but its plane of polarization changes or rotates at twice the orbital rate, so the time-varying gravitational wave size, or 'periodic spacetime strain', exhibits a variation as shown in the animation. The first observation of gravitational waves was announced on 11 February 2016.[SEP]Which of the following statements accurately describes the characteristics of gravitational waves?","['C', 'D', 'A']",1.0
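The context above states that the speed, wavelength, and frequency of a gravitational wave are related just as for a light wave, with the wave speed equal to the speed of light c, i.e. presumably c = λf. A minimal Python sketch of that relation follows; the example frequencies are assumed illustrative values.
# Minimal sketch: gravitational-wave wavelength from c = lambda * f, as described in the context.
# The example frequencies below are assumed illustrative values.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def gw_wavelength(frequency_hz):
    return SPEED_OF_LIGHT / frequency_hz

for frequency_hz in (100.0, 1e-3):   # roughly ground-detector band and millihertz band (assumed examples)
    print(f"f = {frequency_hz:g} Hz -> wavelength ~ {gw_wavelength(frequency_hz):.3e} m")
A 100 Hz wave corresponds to a wavelength of about 3,000 km.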
What is the difference between the coevolution of myrmecophytes and the mutualistic symbiosis of mycorrhiza?,"Many myrmecophytes are defended from both herbivores and other competing plants by their ant symbionts. The ants keep the plant free from other insects and vertebrate herbivores, from invading fungi and also from other plants. == Examples of myrmecophytic genera and species == * Anthorrhiza * Dischidia * Hydnophytum * Lecanopteris * Nepenthes bicalcarata * Microgramma * Myrmecodia * Myrmephytum * Squamellaria * Vachellia cornigera == See also == *Ant garden *List of symbiotic relationships ==Notes== ==References== * * * * * * * * * * * * * == External links == * A video about ant plants Category:Ants Category:Mutualism (biology) Category:Botany * Myrmecotrophy is the ability of plants to obtain nutrients from ants, a form of mutualism. Specifically, domatia adapted to ants may be called myrmecodomatia. == Mutualism == Myrmecophytes share a mutualistic relationship with ants, benefiting both the plants and ants. Mycorrhizal relationships are most commonly mutualistic, with both partners benefiting, but can be commensal or parasitic, and a single partnership may change between any of the three types of symbiosis at different times. Both plants and fungi associate with multiple symbiotic partners at once, and both plants and fungi are capable of preferentially allocating resources to one partner over another. In general, myrmecophytes (or ant plants) usually provide some form of shelter and food in exchange for ant ""tending"", which may include protection, seed dispersal (see myrmecochory), reduced competition from other plants, hygienic services, and/or nutrient supplementation.M. Heil and D. McKey, ""Protective ant-plant interactions as model systems in ecological and evolutionary research,"" Annual Review of Ecology, Evolution, and Systematics, vol. 34, 2003, pp. 425-453. In evolutionary biology, mycorrhizal symbiosis has prompted inquiries into the possibility that symbiosis, not competition, is the main driver of evolution. A plant sharing a mycorrhizal network with another that is attacked will display similar defensive strategies, and its defenses will be primed to increase the production of toxins or chemicals which repel attackers or attract defensive species. Studies have found that association with mature plants correlates with higher survival of the plant and greater diversity and species richness of the mycorrhizal fungi. === Carbon transfer === Mycorrhizal networks can transfer carbon between plants in the network through the fungi linking them. A carnivorous plant fed by its ant symbiont: a unique multi-faceted nutritional mutualism. Myrmecophily is considered a form of indirect plant defense against herbivory, though ants often provide other services in addition to protection. Many studies report that mycorrhizal networks facilitate the coordination of defenses between connected plants using volatile organic compounds and other plant defensive enzymes acting as infochemicals. In studying the coevolution of myrmecophilous organisms, many researchers have addressed the relative costs and benefits of mutualistic interactions, which can vary drastically according to local species composition and abundance, variation in nutrient requirements and availability, host plant quality, presence of alternative food sources, abundance and composition of predator and parasitoid species, and abiotic conditions. 
It has been demonstrated that mechanisms exist by which mycorrhizal fungi can preferentially allocate nutrients to certain plants without a source–sink relationship. These and other studies provide evidence that mycorrhizal networks can facilitate the effects on plant behavior caused by allelochemicals. === Defensive communication === Mycorrhizal networks can connect many different plants and provide shared pathways by which plants can transfer infochemicals related to attacks by pathogens or herbivores, allowing receiving plants to react in the same way as the infected or infested plants. Therefore, they provide ideal model systems in which to explore the magnitude, dynamics, and frequency of mutualism in nature. ==See also== * Myrmecochory * Myrmecophyte * Myrmecotrophy * Myrmecomorphy (ant mimicry) ==References== Category:Ecology Category:Myrmecology Category:Ants Category:Mutualism (biology) Scientists believe that transfer of nutrients by way of mycorrhizal networks could act to alter the behavior of receiving plants by inducing physiological or biochemical changes, and there is evidence that these changes have improved nutrition, growth and survival of receiving plants. === Mechanisms === Several mechanisms have been observed and proposed by which nutrients can move between plants connected by a mycorrhizal network, including source-sink relationships, preferential transfer and kin related mechanisms. In laboratory tests, the worker ants did not survive away from the plants, and in their natural habitat they were never found anywhere else. ===Facultative mutualism=== Facultative mutualism is a type of relationship where the survival of both parties (plant and ants, in this instance) is not dependent upon the interaction. Acacia ants|right|thumb Myrmecophytes (; literally ""ant-plant"") are plants that live in a mutualistic association with a colony of ants. ","Myrmecophytes coevolve with ants, providing them with a home and sometimes food, while the ants defend the plant from herbivores and competing plants. On the other hand, mycorrhiza is a mutualistic symbiosis between plants and fungi, where the fungi help the plants gain water and mineral nutrients from the soil, while the plant gives the fungi carbohydrates manufactured in photosynthesis.","Myrmecophytes coevolve with ants, providing them with food, while the ants defend the plant from herbivores and competing plants. On the other hand, mycorrhiza is a mutualistic symbiosis between plants and fungi, where the fungi help the plants gain water and mineral nutrients from the soil, while the plant gives the fungi water and mineral nutrients.","Myrmecophytes coevolve with butterflies, providing them with a home and sometimes food, while the butterflies defend the plant from herbivores and competing plants. On the other hand, mycorrhiza is a mutualistic symbiosis between plants and birds, where the birds help the plants gain water and mineral nutrients from the soil, while the plant gives the birds carbohydrates manufactured in photosynthesis.","Myrmecophytes coevolve with birds, providing them with a home and sometimes food, while the birds defend the plant from herbivores and competing plants. 
On the other hand, mycorrhiza is a mutualistic symbiosis between plants and bacteria, where the bacteria help the plants gain water and mineral nutrients from the soil, while the plant gives the bacteria carbohydrates manufactured in photosynthesis.","Myrmecophytes coevolve with bees, providing them with a home and sometimes food, while the bees defend the plant from herbivores and competing plants. On the other hand, mycorrhiza is a mutualistic symbiosis between plants and insects, where the insects help the plants gain water and mineral nutrients from the soil, while the plant gives the insects carbohydrates manufactured in photosynthesis.",A,kaggle200,"Many plants don't really have secondary metabolites, chemical processes, and/or mechanical defenses to help them fend off herbivores. Instead, these plants rely on overcompensation (which is regarded as a form of mutualism) when they are attacked by herbivores. Overcompensation in plants is defined as plants having higher fitness when attacked by an herbivore/predator, in comparison to undamaged plants. Which ends up being somewhat of a mutualistic relationship, the herbivore is satisfied with a meal, while the plant starts growing the missing part quickly. These plants then have a higher chance of reproducing, hence why their fitness is increased. Sadly, this approach is often dismissed because many scientists fail to see how direct herbivory can lead to plant evolution and mutualistic relationships.
Plants also have mutualistic symbiotic relationships with fungal communities that are found in a microbe abundant layer of the soil called the rhizosphere. Fungi can be vertically transmitted to progeny plants, or horizontally through fungal diffusion in the soil. Regardless of transmission, the most common cases of fungal plant symbiosis happens when fungal communities colonize plant root structure. There are some cases of symbiosis that Begin before maturity such as the Orchidaceae family, in which symbiosis begins at the seed germination phase. Arbuscular mycorrhizal fungi supply the plant essential inorganic nutrients (in the form of minerals) for 80% of terrestrial plant species. In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
The relationship between plants and mycorrhizal fungi is an example of mutualism because plants provide fungi with carbohydrates and mycorrhizal fungi help plants absorb more water and nutrients. Since mycorrhizal fungi increase plants' uptake of below-ground resources, plants that form a mutualistic relationship with fungi have stimulated shoot growth and a higher shoot-to-root ratio.
""Roridula"" has a more complex relationship with its prey. The plants in this genus produce sticky leaves with resin-tipped glands that look similar to those of larger ""Drosera"". However, the resin, unlike mucilage, is unable to carry digestive enzymes. Therefore, ""Roridula"" species do not directly benefit from the insects they catch. Instead, they form a mutualistic symbiosis with species of assassin bugs that eat the trapped insects. The plant benefits from the nutrients in the bugs' feces.","Sugar-water/mineral exchange The mycorrhizal mutualistic association provides the fungus with relatively constant and direct access to carbohydrates, such as glucose and sucrose. The carbohydrates are translocated from their source (usually leaves) to root tissue and on to the plant's fungal partners. In return, the plant gains the benefits of the mycelium's higher absorptive capacity for water and mineral nutrients, partly because of the large surface area of fungal hyphae, which are much longer and finer than plant root hairs, and partly because some such fungi can mobilize soil minerals unavailable to the plants' roots. The effect is thus to improve the plant's mineral absorption capabilities.Unaided plant roots may be unable to take up nutrients that are chemically or physically immobilised; examples include phosphate ions and micronutrients such as iron. One form of such immobilization occurs in soil with high clay content, or soils with a strongly basic pH. The mycelium of the mycorrhizal fungus can, however, access many such nutrient sources, and make them available to the plants they colonize. Thus, many plants are able to obtain phosphate without using soil as a source. Another form of immobilisation is when nutrients are locked up in organic matter that is slow to decay, such as wood, and some mycorrhizal fungi act directly as decay organisms, mobilising the nutrients and passing some onto the host plants; for example, in some dystrophic forests, large amounts of phosphate and other nutrients are taken up by mycorrhizal hyphae acting directly on leaf litter, bypassing the need for soil uptake. Inga alley cropping, an agroforestry technique proposed as an alternative to slash and burn rainforest destruction, relies upon mycorrhiza within the root system of species of Inga to prevent the rain from washing phosphorus out of the soil.In some more complex relationships, mycorrhizal fungi do not just collect immobilised soil nutrients, but connect individual plants together by mycorrhizal networks that transport water, carbon, and other nutrients directly from plant to plant through underground hyphal networks.Suillus tomentosus, a basidiomycete fungus, produces specialized structures known as tuberculate ectomycorrhizae with its plant host lodgepole pine (Pinus contorta var. latifolia). These structures have been shown to host nitrogen fixing bacteria which contribute a significant amount of nitrogen and allow the pines to colonize nutrient-poor sites.
Plants also have mutualistic symbiotic relationships with fungal communities that are found in a microbe abundant layer of the soil called the rhizosphere. Fungi can be vertically transmitted to progeny plants, or horizontally through fungal diffusion in the soil. Regardless of transmission, the most common cases of fungal plant symbiosis happens when fungal communities colonize plant root structure. There are some cases of symbiosis that Begin before maturity such as the Orchidaceae family, in which symbiosis begins at the seed germination phase. Arbuscular mycorrhizal fungi supply the plant essential inorganic nutrients (in the form of minerals) for 80% of terrestrial plant species. In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
A mycorrhiza is a symbiotic association between a green plant and a fungus. The plant makes organic molecules such as sugars by photosynthesis and supplies them to the fungus, while the fungus supplies the plant with water and mineral nutrients, such as phosphorus, taken from the soil. Mycorrhizas are located in the roots of vascular plants, but mycorrhiza-like associations also occur in bryophytes and there is fossil evidence that early land plants that lacked roots formed arbuscular mycorrhizal associations. Most plant species form mycorrhizal associations, though some families like Brassicaceae and Chenopodiaceae cannot. Different forms for the association are detailed in the next section. The most common is the arbuscular type that is present in 70% of plant species, including many crop plants such as wheat and rice.","In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
The relationship between plants and mycorrhizal fungi is an example of mutualism because plants provide fungi with carbohydrates and mycorrhizal fungi help plants absorb more water and nutrients. Sugar-water/mineral exchange: The mycorrhizal mutualistic association provides the fungus with relatively constant and direct access to carbohydrates, such as glucose and sucrose. In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
A mycorrhiza is a symbiotic association between a green plant and a fungus. Most plant species form mycorrhizal associations, though some families like Brassicaceae and Chenopodiaceae cannot. Since mycorrhizal fungi increase plants' uptake of below-ground resources, plants that form a mutualistic relationship with fungi have stimulated shoot growth and a higher shoot-to-root ratio.
""Roridula"" has a more complex relationship with its prey. The mycelium of the mycorrhizal fungus can, however, access many such nutrient sources, and make them available to the plants they colonize. Mycorrhizas are located in the roots of vascular plants, but mycorrhiza-like associations also occur in bryophytes and there is fossil evidence that early land plants that lacked roots formed arbuscular mycorrhizal associations. Arbuscular mycorrhizal fungi supply the plant essential inorganic nutrients (in the form of minerals) for 80% of terrestrial plant species. Inga alley cropping, an agroforestry technique proposed as an alternative to slash and burn rainforest destruction, relies upon mycorrhiza within the root system of species of Inga to prevent the rain from washing phosphorus out of the soil. In some more complex relationships, mycorrhizal fungi do not just collect immobilised soil nutrients, but connect individual plants together by mycorrhizal networks that transport water, carbon, and other nutrients directly from plant to plant through underground hyphal networks. Suillus tomentosus, a bas","In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
The relationship between plants and mycorrhizal fungi is an example of mutualism because plants provide fungi with carbohydrates and mycorrhizal fungi help plants absorb more water and nutrients. Sugar-water/mineral exchange: The mycorrhizal mutualistic association provides the fungus with relatively constant and direct access to carbohydrates, such as glucose and sucrose. In return the plant will provide fungi with plant assimilated carbon that can easily be metabolized and used for energy.
A mycorrhiza is a symbiotic association between a green plant and a fungus. Most plant species form mycorrhizal associations, though some families like Brassicaceae and Chenopodiaceae cannot. Since mycorrhizal fungi increase plants' uptake of below-ground resources, plants that form a mutualistic relationship with fungi have stimulated shoot growth and a higher shoot-to-root ratio.
""Roridula"" has a more complex relationship with its prey. The mycelium of the mycorrhizal fungus can, however, access many such nutrient sources, and make them available to the plants they colonize. Mycorrhizas are located in the roots of vascular plants, but mycorrhiza-like associations also occur in bryophytes and there is fossil evidence that early land plants that lacked roots formed arbuscular mycorrhizal associations. Arbuscular mycorrhizal fungi supply the plant essential inorganic nutrients (in the form of minerals) for 80% of terrestrial plant species. Inga alley cropping, an agroforestry technique proposed as an alternative to slash and burn rainforest destruction, relies upon mycorrhiza within the root system of species of Inga to prevent the rain from washing phosphorus out of the soil. In some more complex relationships, mycorrhizal fungi do not just collect immobilised soil nutrients, but connect individual plants together by mycorrhizal networks that transport water, carbon, and other nutrients directly from plant to plant through underground hyphal networks. Suillus tomentosus, a bas[SEP]What is the difference between the coevolution of myrmecophytes and the mutualistic symbiosis of mycorrhiza?","['A', 'B', 'E']",1.0
What is the Kelvin-Helmholtz instability and how does it affect Earth's magnetosphere?,"This is a form of Kelvin–Helmholtz instability. thumb|right|Numerical simulation of a temporal Kelvin–Helmholtz instability The Kelvin–Helmholtz instability (after Lord Kelvin and Hermann von Helmholtz) is a fluid instability that occurs when there is velocity shear in a single continuous fluid or a velocity difference across the interface between two fluids. If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin- Helmholtz instability is described by the Taylor–Goldstein equation: (U-c)^2\left({d^2\tilde\phi \over d z^2} - k^2\tilde\phi\right) +\left[N^2-(U-c){d^2 U \over d z^2}\right]\tilde\phi = 0, where N = \sqrt{g / L_\rho} denotes the Brunt–Väisälä frequency, U is the horizontal parallel velocity, k is the wave number, c is the eigenvalue parameter of the problem, \tilde\phi is complex amplitude of the stream function. The impact of the solar wind onto the magnetosphere generates an electric field within the inner magnetosphere (r < 10 a; with a the Earth's radius) - the convection field-. Numerically, the Kelvin–Helmholtz instability is simulated in a temporal or a spatial approach. Kelvin-Helmholtz instabilities are visible in the atmospheres of planets and moons, such as in cloud formations on Earth or the Red Spot on Jupiter, and the atmospheres of the Sun and other stars. ==Theory overview and mathematical concepts== thumb|right|A KH instability on the planet Saturn, formed at the interaction of two bands of the planet's atmosphere thumb|left|Kelvin-Helmholtz billows 500m deep in the Atlantic Ocean thumb|Animation of the KH instability, using a second order 2D finite volume scheme Fluid dynamics predicts the onset of instability and transition to turbulent flow within fluids of different densities moving at different speeds. ""The Inner Magnetosphere: Physics and Modelling"", Geophysical Monograph AGU, Washington, D.C., 2000 One possibility is viscous interaction between solar wind and the boundary layer of the magnetosphere (magnetopause). During major magnetospheric disturbances, large amounts of ionospheric plasma are transported into the polar ionosphere by the electric convection fields, causing severe ionospheric anomalies and impacting space weather. ==See also== *Corotation electric field ==Literature== Category:Geomagnetism Longer-lasting magnetospheric disturbances of the order of several hours to days can develop into global- scale thermospheric and ionospheric storms (e.g.,Prölss, G.W. and M. K. Bird, ""Physics of the Earth's Space Environment"", Springer Verlag, Heidelberg, 2010). The thermal plasma within the inner magnetosphere co- rotates with the Earth. The co- rotating thermal plasma within the inner magnetosphere drifts orthogonal to that field and to the geomagnetic field Bo. This instability is a turbulence of the electron gas in a non-equilibrium plasma (i.e. where the electron temperature Te is greatly higher than the overall gas temperature Tg). The variability of the solar wind flux determines the magnetospheric activity, generally expressed by the degree of geomagnetic activity observed on the ground. ==Polar Magnetosphere== The electric convection field in the near Earth polar region can be simulated by eq.() with the exponent q = - 1/2. 
Magnetic pulsations are extremely low frequency disturbances in the Earth's magnetosphere driven by its interactions with the solar wind. The electric field reversal at Lm clearly indicates a reversal of the plasma drift within the inner and the polar magnetosphere. From the shape of the observed plasmapause configuration, the exponent q = 2 in eq.() has been determined, while the extent of the plasmapause decreasing with geomagnetic activity is simulated by the amplitude Φco ==Origin of Convection Field== The origin of the electric convection field results from the interaction between the solar wind plasma and the geomagnetic field. For a transformation from a rotating magnetospheric coordinate system into a non-rotating system, τ must be replaced by the longitude -λ. ==Inner Magnetosphere== With the numbers q ~ 2, and Φco and τco increasing with geomagnetic activity (e.g., Φco ~ 17 and 65 kVolt, and τco ~ 0 and 1 h, during geomagnetically quiet and slightly disturbed conditions, respectively), eq.() valid at lower latitudes, (θ > θm) and within the inner magnetosphere (r ≤ 10 a) is the Volland-Stern model (see Fig. 1 a)). thumb|center|upright=2.0|alt=Global magnetospheric electric convection field |Figure 1: Equipotential lines of electric convection field within the equatorial plane of the magnetosphere (left), and superposition of the convection field with the co-rotation field (right) during magnetically quiet conditions The use of an electrostatic field means that this model is valid only for slow temporal variations (of the order of one day or larger). __NOTOC__ The electrothermal instability (also known as ionization instability, non-equilibrium instability or Velikhov instability in the literature) is a magnetohydrodynamic (MHD) instability appearing in magnetized non-thermal plasmas used in MHD converters. Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s/early 1970s, for clouds, and later the ocean. == See also == * Rayleigh–Taylor instability * Richtmyer–Meshkov instability * Mushroom cloud * Plateau–Rayleigh instability * Kármán vortex street * Taylor–Couette flow * Fluid mechanics * Fluid dynamics *Reynolds number *Turbulence == Notes == == References == * * * Article describing discovery of K-H waves in deep ocean: == External links == * * Giant Tsunami-Shaped Clouds Roll Across Alabama Sky - Natalie Wolchover, Livescience via Yahoo.com * Tsunami Cloud Hits Florida Coastline * Vortex formation in free jet - YouTube video showing Kelvin Helmholtz waves on the edge of a free jet visualised in a scientific experiment. Atmospheric convection is the result of a parcel-environment instability, or temperature difference layer in the atmosphere. ","The Kelvin-Helmholtz instability is a phenomenon that occurs when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. 
This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere.",The Kelvin-Helmholtz instability is a phenomenon that occurs when the magnetosphere is compared to a sieve because it allows solar wind particles to enter.,"The Kelvin-Helmholtz instability is a phenomenon that occurs when Earth's bow shock is about 17 kilometers (11 mi) thick and located about 90,000 kilometers (56,000 mi) from Earth.","The Kelvin-Helmholtz instability is a phenomenon that occurs when the magnetic field extends in the magnetotail on Earth's nightside, which lengthwise exceeds 6,300,000 kilometers (3,900,000 mi).","The Kelvin-Helmholtz instability is a phenomenon that occurs when the magnetosphere is compressed by the solar wind to a distance of approximately 65,000 kilometers (40,000 mi) on the dayside of Earth. This results in the magnetopause existing at a distance of several hundred kilometers above Earth's surface.",A,kaggle200,"The reference to the thunderstorm front corresponds to the outflow boundary associated with downbursts that are indeed very dangerous and are the site of vortices associated with the Kelvin-Helmholtz instability at the junction between updraughts and downdraughts. However, in front of the thunderstorm, updraughts are generally laminar due to the negative buoyancy of air parcels (see above).
If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin-Helmholtz instability is described by the Taylor–Goldstein equation:
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. Energy is dissipated in the form of two surface waves which propagate in opposite directions, one flowing over the other. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. However, at high altitudes, the magnetic field is significantly distorted by the solar wind and its solar magnetic field. On the dayside of Earth, the magnetic field is significantly compressed by the solar wind to a distance of approximately 65,000 kilometers (40,000 mi). Earth's bow shock is about 17 kilometers (11 mi) thick and located about 90,000 kilometers (56,000 mi) from Earth. The magnetopause exists at a distance of several hundred kilometers above Earth's surface. Earth's magnetopause has been compared to a sieve because it allows solar wind particles to enter. Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. On Earth's nightside, the magnetic field extends in the magnetotail, which lengthwise exceeds 6,300,000 kilometers (3,900,000 mi). Earth's magnetotail is the primary source of the polar aurora. Also, NASA scientists have suggested that Earth's magnetotail might cause ""dust storms"" on the Moon by creating a potential difference between the day side and the night side.
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. Energy is dissipated in the form of two surface waves which propagate in opposite directions, one flowing over the other. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Earth's magnetosphere: Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. However, at high altitudes, the magnetic field is significantly distorted by the solar wind and its solar magnetic field. On the dayside of Earth, the magnetic field is significantly compressed by the solar wind to a distance of approximately 65,000 kilometers (40,000 mi). Earth's bow shock is about 17 kilometers (11 mi) thick and located about 90,000 kilometers (56,000 mi) from Earth. The magnetopause exists at a distance of several hundred kilometers above Earth's surface. Earth's magnetopause has been compared to a sieve because it allows solar wind particles to enter. Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. On Earth's nightside, the magnetic field extends in the magnetotail, which lengthwise exceeds 6,300,000 kilometers (3,900,000 mi). Earth's magnetotail is the primary source of the polar aurora. Also, NASA scientists have suggested that Earth's magnetotail might cause ""dust storms"" on the Moon by creating a potential difference between the day side and the night side.","Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Earth's magnetosphere: Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s/early 1970s, for clouds, and later the ocean.
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. However, in front of the thunderstorm, updraughts are generally laminar due to the negative buoyancy of air parcels (see above).
If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin-Helmholtz instability is described by the Taylor–Goldstein equation:
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. The magnetopause exists at a distance of several hundred kilometers above Earth's surface. The reference to the thunderstorm front corresponds to the outflow boundary associated with downbursts that are indeed very dangerous and are the site of vortices associated with the Kelvin-Helmholtz instability at the junction between updraughts and downdraughts. Throughout the early 20th Century, the ideas of Kelvin-Helmholtz instabilities were applied to a range of stratified fluid applications. Earth's magnetotail is the primary source of the polar aurora. However, at high altitudes, the magnetic field is significantly distorted by ","Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Earth's magnetosphere: Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. This instability is the plasma analog of the Kelvin-Helmholtz instability in fluid mechanics.
Over Earth's equator, the magnetic field lines become almost horizontal, then return to reconnect at high latitudes. Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s/early 1970s, for clouds, and later the ocean.
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. However, in front of the thunderstorm, updraughts are generally laminar due to the negative buoyancy of air parcels (see above).
If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin-Helmholtz instability is described by the Taylor–Goldstein equation:
A diocotron instability is a plasma instability created by two sheets of charge slipping past each other. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. The magnetopause exists at a distance of several hundred kilometers above Earth's surface. The reference to the thunderstorm front corresponds to the outflow boundary associated with downbursts that are indeed very dangerous and are the site of vortices associated with the Kelvin-Helmholtz instability at the junction between updraughts and downdraughts. Throughout the early 20th Century, the ideas of Kelvin-Helmholtz instabilities were applied to a range of stratified fluid applications. Earth's magnetotail is the primary source of the polar aurora. However, at high altitudes, the magnetic field is significantly distorted by [SEP]What is the Kelvin-Helmholtz instability and how does it affect Earth's magnetosphere?","['E', 'D', 'A']",0.3333333333333333
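The context above notes Richardson's idea that shear instability forms only where velocity shear overcomes the static stability due to stratification, encapsulated in the Richardson number. A minimal Python sketch of the gradient Richardson number Ri = N^2 / (dU/dz)^2 follows; the buoyancy frequency and shear values are assumed illustrative numbers, and Ri < 1/4 somewhere in the flow is the usual necessary condition for Kelvin-Helmholtz instability.
# Minimal sketch: gradient Richardson number Ri = N^2 / (dU/dz)^2 for a stratified shear flow.
# The buoyancy frequency and velocity shears below are assumed illustrative values.
def richardson_number(buoyancy_freq, velocity_shear):
    return buoyancy_freq ** 2 / velocity_shear ** 2

N_BV = 0.01                      # Brunt-Vaisala (buoyancy) frequency, 1/s (assumed)
for shear in (0.005, 0.05):      # vertical shear dU/dz, 1/s (assumed)
    ri = richardson_number(N_BV, shear)
    verdict = "shear instability possible (Ri < 1/4)" if ri < 0.25 else "stratification dominates (Ri >= 1/4)"
    print(f"dU/dz = {shear} 1/s -> Ri = {ri:.2f}: {verdict}")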
What is the significance of the high degree of fatty-acyl disorder in the thylakoid membranes of plants?,"On the thylakoid membranes are photosynthetic pigments, including chlorophyll a. Chloroplasts in plants are notable as they exhibit a distinct chloroplast dimorphism. === pH === Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8. A chloroplast is characterized by its two membranes and a high concentration of chlorophyll. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats. In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized. ==== Peripheral reticulum ==== Some chloroplasts contain a structure called the chloroplast peripheral reticulum. Like chloroplasts, they have thylakoids within them. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose form a single continuous labyrinth. ====Thylakoid composition==== Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. For a long time, the three-dimensional structure of the thylakoid membrane system had been unknown or disputed. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. Chloroplasts synthesize all the fatty acids in a plant cell—linoleic acid, a fatty acid, is a precursor to jasmonate. === Photosynthesis === One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration. A chloroplast () is a type of membrane-bound organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. In the presence of light, the pH of the thylakoid lumen can drop up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit. === Amino acid synthesis === Chloroplasts alone make almost all of a plant cell's amino acids in their stroma except the sulfur-containing ones like cysteine and methionine. Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick. Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberelins, sterols, abscisic acid, phytol, and innumerable secondary metabolites. == Differentiation, replication, and inheritance == Chloroplasts are a special type of a plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. 
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. Plant Physiology and Development, Sixth Edition === Cellular location === ==== Chloroplast movement ==== The chloroplasts of plant and algal cells can orient themselves to best suit the available light. ",The high degree of fatty-acyl disorder in the thylakoid membranes of plants is responsible for the low fluidity of membrane lipid fatty-acyl chains in the gel phase.,The high degree of fatty-acyl disorder in the thylakoid membranes of plants is responsible for the exposure of chloroplast thylakoid membranes to cold environmental temperatures.,The high degree of fatty-acyl disorder in the thylakoid membranes of plants allows for innate fluidity even at relatively low temperatures.,The high degree of fatty-acyl disorder in the thylakoid membranes of plants allows for a gel-to-liquid crystalline phase transition temperature to be determined by many techniques.,"The high degree of fatty-acyl disorder in the thylakoid membranes of plants restricts the movement of membrane proteins, thus hindering their physiological role.",C,kaggle200,"A study of central linewidths of electron spin resonance spectra of thylakoid membranes and aqueous dispersions of their total extracted lipids, labeled with stearic acid spin label (having spin or doxyl moiety at 5,7,9,12,13,14 and 16th carbons, with reference to carbonyl group), reveals a ""fluidity gradient"". Decreasing linewidth from 5th to 16th carbons represents increasing degree of motional freedom (""fluidity gradient"") from headgroup-side to methyl terminal in both native membranes and their aqueous lipid extract (a multilamellar liposomal structure, typical of lipid bilayer organization). This pattern points at similarity of lipid bilayer organization in both native membranes and liposomes. This observation is critical, as thylakoid membranes comprising largely galactolipids, contain only 10% phospholipid, unlike other biological membranes consisting largely of phospholipids. Proteins in chloroplast thylakoid membranes, apparently, restrict lipid fatty acyl chain segmental mobility from 9th to 16th carbons ""vis a vis"" their liposomal counterparts. Surprisingly, liposomal fatty acyl chains are more restricted at 5th and 7th carbon positions as compared at these positions in thylakoid membranes. This is explainable as due to motional restricting effect at these positions, because of steric hindrance by large chlorophyll headgroups, specially so, in liposomes. However, in native thylakoid membranes, chlorophylls are mainly complexed with proteins as light-harvesting complexes and may not largely be free to restrain lipid fluidity, as such.
Their chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in their thylakoid space, rather than anchored on the outside of their thylakoid membranes.
The thylakoid membrane is the site of the light-dependent reactions of photosynthesis, with the photosynthetic pigments embedded directly in the membrane. It appears as an alternating pattern of dark and light bands, each measuring 1 nanometre. The thylakoid lipid bilayer shares characteristic features with prokaryotic membranes and the inner chloroplast membrane. For example, acidic lipids can be found in thylakoid membranes, cyanobacteria and other photosynthetic bacteria, and are involved in the functional integrity of the photosystems. The thylakoid membranes of higher plants are composed primarily of phospholipids and galactolipids that are asymmetrically arranged along and across the membranes. Thylakoid membranes are richer in galactolipids than in phospholipids; they also predominantly consist of hexagonal phase II forming monogalactosyl diglyceride lipid. Despite this unique composition, plant thylakoid membranes have been shown to assume largely lipid-bilayer dynamic organization. Lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid, are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and the inner membrane of the plastid envelope, and are transported from the inner membrane to the thylakoids via vesicles.
In ""biological membranes"", gel to liquid crystalline phase transitions play a critical role in the physiological functioning of biomembranes. In the gel phase, due to the low fluidity of membrane lipid fatty-acyl chains, membrane proteins have restricted movement and thus are restrained in the exercise of their physiological role. Plants depend critically on photosynthesis by chloroplast thylakoid membranes, which are exposed to cold environmental temperatures. Thylakoid membranes retain innate fluidity even at relatively low temperatures because of the high degree of fatty-acyl disorder allowed by their high content of linolenic acid, an 18-carbon chain with 3 double bonds. The gel-to-liquid crystalline phase transition temperature of biological membranes can be determined by many techniques, including calorimetry, fluorescence, spin label electron paramagnetic resonance and NMR, by recording measurements of the relevant parameter at a series of sample temperatures. A simple method for its determination from 13-C NMR line intensities has also been proposed.","Plant chloroplast thylakoid membranes, however, have a unique lipid composition as they are deficient in phospholipids. Also, their largest constituent, monogalactosyl diglyceride or MGDG, does not form aqueous bilayers. Nevertheless, dynamic studies reveal a normal lipid bilayer organisation in thylakoid membranes.
A study of central linewidths of electron spin resonance spectra of thylakoid membranes and aqueous dispersions of their total extracted lipids, labeled with stearic acid spin label (having spin or doxyl moiety at 5,7,9,12,13,14 and 16th carbons, with reference to carbonyl group), reveals a fluidity gradient. Decreasing linewidth from 5th to 16th carbons represents increasing degree of motional freedom (fluidity gradient) from headgroup-side to methyl terminal in both native membranes and their aqueous lipid extract (a multilamellar liposomal structure, typical of lipid bilayer organization). This pattern points at similarity of lipid bilayer organization in both native membranes and liposomes. This observation is critical, as thylakoid membranes comprising largely galactolipids, contain only 10% phospholipid, unlike other biological membranes consisting largely of phospholipids. Proteins in chloroplast thylakoid membranes, apparently, restrict lipid fatty acyl chain segmental mobility from 9th to 16th carbons vis a vis their liposomal counterparts. Surprisingly, liposomal fatty acyl chains are more restricted at 5th and 7th carbon positions as compared at these positions in thylakoid membranes. This is explainable as due to motional restricting effect at these positions, because of steric hindrance by large chlorophyll headgroups, specially so, in liposomes. However, in native thylakoid membranes, chlorophylls are mainly complexed with proteins as light-harvesting complexes and may not largely be free to restrain lipid fluidity, as such.
Membrane The thylakoid membrane is the site of the light-dependent reactions of photosynthesis with the photosynthetic pigments embedded directly in the membrane. It is an alternating pattern of dark and light bands measuring each 1 nanometre. The thylakoid lipid bilayer shares characteristic features with prokaryotic membranes and the inner chloroplast membrane. For example, acidic lipids can be found in thylakoid membranes, cyanobacteria and other photosynthetic bacteria and are involved in the functional integrity of the photosystems. The thylakoid membranes of higher plants are composed primarily of phospholipids and galactolipids that are asymmetrically arranged along and across the membranes. Thylakoid membranes are richer in galactolipids rather than phospholipids; also they predominantly consist of hexagonal phase II forming monogalacotosyl diglyceride lipid. Despite this unique composition, plant thylakoid membranes have been shown to assume largely lipid-bilayer dynamic organization. Lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and inner membrane of the plastid envelope and transported from the inner membrane to the thylakoids via vesicles.","The thylakoid membranes of higher plants are composed primarily of phospholipids and galactolipids that are asymmetrically arranged along and across the membranesSurprisingly, liposomal fatty acyl chains are more restricted at 5th and 7th carbon positions as compared at these positions in thylakoid membranesPlant chloroplast thylakoid membranes however, have unique lipid composition as they are deficient in phospholipidsDespite this unique composition, plant thylakoid membranes have been shown to assume largely lipid-bilayer dynamic organizationProteins in chloroplast thylakoid membranes, apparently, restrict lipid fatty acyl chain segmental mobility from 9th to 16th carbons vis a vis their liposomal counterpartsProteins in chloroplast thylakoid membranes, apparently, restrict lipid fatty acyl chain segmental mobility from 9th to 16th carbons ""vis a vis"" their liposomal counterpartsThylakoid membranes retain innate fluidity even at relatively low temperatures because of high degree of fatty-acyl disorder allowed by their high content of linolenic acid, 18-carbon chain with 3-double bondsThylakoid membranes are richer in galactolipids rather than phospholipids; also they predominantly consist of hexagonal phase II forming monogalacotosyl diglyceride lipidThis observation is critical, as thylakoid membranes comprising largely galactolipids, contain only 10% phospholipid, unlike other biological membranes consisting largely of phospholipidsNevertheless, dynamic studies reveal a normal lipid bilayer organisation in thylakoid membranes.
A study of central linewidths of electron spin resonance spectra of thylakoid membranes and aqueous dispersions of their total extracted lipids, labeled with stearic acid spin label (having spin or doxyl moiety at 5,7,9,12,13,14 and 16th carbons, with reference to carbonyl group), reveals a fluidity gradient. Lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid, are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and inner membrane of the plastid envelope 
A study of central linewidths of electron spin resonance spectra of thylakoid membranes and aqueous dispersions of their total extracted lipids, labeled with stearic acid spin label (having spin or doxyl moiety at 5,7,9,12,13,14 and 16th carbons, with reference to carbonyl group), reveals a fluidity gradient. Lipids forming the thylakoid membranes, richest in high-fluidity linolenic acid, are synthesized in a complex pathway involving exchange of lipid precursors between the endoplasmic reticulum and inner membrane of the plastid envelope [SEP]What is the significance of the high degree of fatty-acyl disorder in the thylakoid membranes of plants?","['C', 'D', 'E']",1.0
What is the explanation for the effective supersymmetry in quark-diquark models?,"Corresponding models of baryons are referred to as quark–diquark models. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872). == Formation == The forces between the two quarks in a diquark is attractive when both the colors and spins are antisymmetric. In particle physics, a diquark, or diquark correlation/clustering, is a hypothetical state of two quarks grouped inside a baryon (that consists of three quarks) (Lichtenberg 1982). In this study the baryon had one heavy and two light quarks. Since the heavy quark is inert, the scientists were able to discern the properties of the different quark configurations in the hadronic spectrum. == Λ and Σ baryon experiment == An experiment was conducted using diquarks in an attempt to study the Λ and Σ baryons that are produced in the creation of hadrons created by fast-moving quarks. When both quarks are correlated in this way they tend to form a very low energy configuration. There are many different pieces of evidence that prove diquarks are fundamental in the structure of hadrons. Even though they may contain two quarks they are not colour neutral, and therefore cannot exist as isolated bound states. The existence of diquarks inside the nucleons is a disputed issue, but it helps to explain some nucleon properties and to reproduce experimental data sensitive to the nucleon structure. When generating a baryon by assembling quarks, it is helpful if the quarks first form a stable two-quark state. From this experiment scientists inferred that Λ baryons are more common than Σ baryons, and indeed they are more common by a factor of 10. ==References== ==Further reading== * * Category:Quarks This produced the quark–antiquark pairs, which then converted themselves into mesons. While the top quark is the heaviest known quark, the stop squark is actually often the lightest squark in many supersymmetry models.Search For Pair Production of Stop Quarks Mimicking Top Event Signatures ==Overview== The stop squark is a key ingredient of a wide range of SUSY models that address the hierarchy problem of the Standard Model (SM) in a natural way. One of the most compelling pieces of evidence comes from a recent study of baryons. The Λ and the Σ are created as a result of up, down and strange quarks. This also happens to be the same size as the hadron itself. == Uses == Diquarks are the conceptual building blocks, and as such give scientists an ordering principle for the most important states in the hadronic spectrum. This low energy configuration has become known as a diquark. == Controversy == Many scientists theorize that a diquark should not be considered a particle. In theoretical physics, one often analyzes theories with supersymmetry in which D-terms play an important role. In the generic R-parity conserving Minimal Supersymmetric Standard Model (MSSM) the scalar partners of right-handed and left-handed top quarks mix to form two stop mass eigenstates. ","Two different color charges close together appear as the corresponding anti-color under coarse resolution, which makes a diquark cluster viewed with coarse resolution effectively appear as an antiquark. 
Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.","Two different color charges close together appear as the corresponding color under coarse resolution, which makes a diquark cluster viewed with coarse resolution effectively appear as a quark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a baryon.","Two different color charges close together appear as the corresponding color under fine resolution, which makes a diquark cluster viewed with fine resolution effectively appear as a quark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a baryon.","Two different color charges close together appear as the corresponding anti-color under fine resolution, which makes a diquark cluster viewed with fine resolution effectively appear as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.","Two different color charges close together appear as the corresponding anti-color under any resolution, which makes a diquark cluster viewed with any resolution effectively appear as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.",A,kaggle200,"Many scientists theorize that a diquark should not be considered a particle. Even though they may contain two quarks they are not colour neutral, and therefore cannot exist as isolated bound states. So instead they tend to float freely inside hadrons as composite entities; while free-floating they have a size of about . This also happens to be the same size as the hadron itself.
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. When both quarks are correlated in this way, they tend to form a very low energy configuration. This low energy configuration has become known as a diquark.
In particle physics, a diquark, or diquark correlation/clustering, is a hypothetical state of two quarks grouped inside a baryon (that consists of three quarks) (Lichtenberg 1982). Corresponding models of baryons are referred to as quark–diquark models. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. The existence of diquarks inside the nucleons is a disputed issue, but it helps to explain some nucleon properties and to reproduce experimental data sensitive to the nucleon structure. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872).
The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves like a meson.","In particle physics, a diquark, or diquark correlation/clustering, is a hypothetical state of two quarks grouped inside a baryon (that consists of three quarks) (Lichtenberg 1982). Corresponding models of baryons are referred to as quark–diquark models. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. The existence of diquarks inside the nucleons is a disputed issue, but it helps to explain some nucleon properties and to reproduce experimental data sensitive to the nucleon structure. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872).
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. When both quarks are correlated in this way, they tend to form a very low energy configuration. This low energy configuration has become known as a diquark.
Supersymmetry in quantum field theory In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only ""loophole"" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.While supersymmetry has not yet been discovered at high energy, see Section Supersymmetry in particle physics, supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics where baryons and mesons are superpartners. An exception is the pion that appears as a zero mode in the mass spectrum and thus protected by the supersymmetry: It has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves likes a meson."," The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.gDiquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872).
The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.gThis low energy configuration has become known as a diquark.
Supersymmetry in quantum field theory: In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Another theoretically appealing property of supersymmetry is that it offers the only ""loophole"" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. Corresponding models of baryons are referred to as quark–diquark models. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872).
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. This also happens to be the same size as the hadron itself.
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. When both quarks are correlated in this way, they tend to form a very low energy configuration. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergra
The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.gThis low energy configuration has become known as a diquark.
Supersymmetry in quantum field theory: In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Another theoretically appealing property of supersymmetry is that it offers the only ""loophole"" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. Corresponding models of baryons are referred to as quark–diquark models. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872).
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. This also happens to be the same size as the hadron itself.
The forces between the two quarks in a diquark are attractive when both the colors and spins are antisymmetric. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. When both quarks are correlated in this way, they tend to form a very low energy configuration. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergra[SEP]What is the explanation for the effective supersymmetry in quark-diquark models?","['E', 'D', 'A']",0.3333333333333333
What is the relationship between the complete electromagnetic Hamiltonian of a molecule and the parity operation?,"Wiley To see that the parity's eigenvalues are phase factors, we assume an eigenstate of the parity operation (this is realized because the intrinsic parity is a property of a particle species) and use the fact that two parity transformations leave the particle in the same state, thus the new wave function can differ by only a phase factor, i.e.: P^{2} \psi = e^{i \phi} \psi thus P \psi = \pm e^{i \phi /2} \psi, since these are the only eigenstates satisfying the above equation. In quantum mechanics, the intrinsic parity is a phase factor that arises as an eigenvalue of the parity operation x_i \rightarrow x_i' = -x_i (a reflection about the origin).Griffiths, D., (1987). Since the parity commutes with the Hamiltonian and \frac{dP}{dt} = 0 its eigenvalue does not change with time, therefore the intrinsic parities phase is a conserved quantity. The intrinsic parity of a system is the product of the intrinsic parities of the particles, for instance for noninteracting particles we have P(|1\rangle|2\rangle)=(P|1\rangle)(P|2\rangle). As [P,H]=0 the Hamiltonian is invariant under a parity transformation. In physics, the C parity or charge parity is a multiplicative quantum number of some particles that describes their behavior under the symmetry operation of charge conjugation. After GUT symmetry breaking, this spinor parity descends into R-parity so long as no spinor fields were used to break the GUT symmetry. R-parity is a \mathbb{Z}_2 symmetry acting on the Minimal Supersymmetric Standard Model (MSSM) fields that forbids these couplings and can be defined as :P_\mathrm{R} = (-1)^{3B+L+2s}, or, equivalently, as :P_\mathrm{R} = (-1)^{3(B-L)+2s}, where is spin, is baryon number, and is lepton number. In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. We can generalize the C-parity so it applies to all charge states of a given multiplet: :\mathcal G \begin{pmatrix} \pi^+ \\\ \pi^0 \\\ \pi^- \end{pmatrix} = \eta_G \begin{pmatrix} \pi^+ \\\ \pi^0 \\\ \pi^- \end{pmatrix} where ηG = ±1 are the eigenvalues of G-parity. All Standard Model particles have R-parity of +1 while supersymmetric particles have R-parity of −1\. R-parity is a concept in particle physics. As a consequence, in such theories R-parity remains exact at all energies. Given that charge conjugation and isospin are preserved by strong interactions, so is G. Weak and electromagnetic interactions, though, are not invariant under G-parity. The intrinsic parity's phase is conserved for non-weak interactions (the product of the intrinsic parities is the same before and after the reaction). A consequence of the Dirac equation is that the intrinsic parity of fermions and antifermions obey the relation P_{\bar{f}}P_f = - 1, so particles and their antiparticles have the opposite parity. G-parity is a combination of charge conjugation and a π rad (180°) rotation around the 2nd axis of isospin space. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. In particle physics, G-parity is a multiplicative quantum number that results from the generalization of C-parity to multiplets of particles. 
Since antiparticles and particles have charges of opposite sign, only states with all quantum charges equal to zero, such as the photon and particle–antiparticle bound states like the neutral pion, η or positronium, are eigenstates of \mathcal C. ==Multiparticle systems== For a system of free particles, the C parity is the product of C parities for each particle. ","The complete electromagnetic Hamiltonian of any molecule is invariant to the parity operation, and its eigenvalues cannot be given the parity symmetry label + or -.","The complete electromagnetic Hamiltonian of any molecule is dependent on the parity operation, and its eigenvalues can be given the parity symmetry label even or odd, respectively.","The complete electromagnetic Hamiltonian of any molecule is dependent on the parity operation, and its eigenvalues can be given the parity symmetry label + or - depending on whether they are even or odd, respectively.","The complete electromagnetic Hamiltonian of any molecule is invariant to the parity operation, and its eigenvalues can be given the parity symmetry label + or - depending on whether they are even or odd, respectively.","The complete electromagnetic Hamiltonian of any molecule does not involve the parity operation, and its eigenvalues cannot be given the parity symmetry label + or -.",D,kaggle200,"If one can show that the vacuum state is invariant under parity, formula_62, the Hamiltonian is parity invariant formula_63 and the quantization conditions remain unchanged under parity, then it follows that every state has good parity, and this parity is conserved in any reaction.
In quantum mechanics, the intrinsic parity is a phase factor that arises as an eigenvalue of the parity operation x_i \rightarrow x_i' = -x_i (a reflection about the origin). To see that the parity's eigenvalues are phase factors, we assume an eigenstate of the parity operation (this is realized because the intrinsic parity is a property of a particle species) and use the fact that two parity transformations leave the particle in the same state, thus the new wave function can differ by only a phase factor, i.e.: P^{2} \psi = e^{i \phi} \psi, thus P \psi = \pm e^{i \phi /2} \psi, since these are the only eigenstates satisfying the above equation.
The complete Hamiltonian of a diatomic molecule (as for all molecules) commutes with the parity operation P or E* and rovibronic (rotation-vibration-electronic) energy levels (often called rotational levels) can be given the parity symmetry label + or -. The complete Hamiltonian of a homonuclear diatomic molecule also commutes with the operation
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass.","The complete Hamiltonian of a diatomic molecule (as for all molecules) commutes with the parity operation P or E* and rovibronic (rotation-vibration-electronic) energy levels (often called rotational levels) can be given the parity symmetry label + or -. The complete Hamiltonian of a homonuclear diatomic molecule also commutes with the operation of permuting (or exchanging) the coordinates of the two (identical) nuclei and rotational levels gain the additional label s or a depending on whether the total wavefunction is unchanged (symmetric) or changed in sign (antisymmetric) by the permutation operation. Thus, the rotational levels of heteronuclear diatomic molecules are labelled + or -, whereas those of homonuclear diatomic molecules are labelled +s, +a, -s or -a. The rovibronic nuclear spin states are classified using the appropriate permutation-inversion group.The complete Hamiltonian of a homonuclear diatomic molecule (as for all centro-symmetric molecules) does not commute with the point group inversion operation i because of the effect of the nuclear hyperfine Hamiltonian. The nuclear hyperfine Hamiltonian can mix the rotational levels of g and u vibronic states (called ortho-para mixing) and give rise to ortho-para transitions Spin and total angular momentum If S denotes the resultant of the individual electron spins, s(s+1)ℏ2 are the eigenvalues of S and as in the case of atoms, each electronic term of the molecule is also characterised by the value of S. If spin-orbit coupling is neglected, there is a degeneracy of order 2s+1 associated with each s for a given Λ . Just as for atoms, the quantity 2s+1 is called the multiplicity of the term and.is written as a (left) superscript, so that the term symbol is written as 2s+1Λ . For example, the symbol 3Π denotes a term such that Λ=1 and s=1 . It is worth noting that the ground state (often labelled by the symbol X ) of most diatomic molecules is such that s=0 and exhibits maximum symmetry. Thus, in most cases it is a 1Σ+ state (written as X1Σ+ , excited states are written with A,B,C,...
Consequences of parity symmetry: When parity generates the Abelian group ℤ2, one can always take linear combinations of quantum states such that they are either even or odd under parity (see the figure). Thus the parity of such states is ±1. The parity of a multiparticle state is the product of the parities of each state; in other words, parity is a multiplicative quantum number.
Molecules: The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass.","The complete Hamiltonian of a homonuclear diatomic molecule also commutes with the operation
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity of a multiparticle state is the product of the parities of each state; in other words, parity is a multiplicative quantum number.
Molecules: The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass. To see that the parity's eigenvalues are phase factors, we assume an eigenstate of the parity operation (this is realized because the intrinsic parity is a property of a particle species) and use the fact that two parity transformations leave the particle in the same state, thus the new wave function can differ by only a phase factor, i.e.: P^{2} \psi = e^{i \phi} \psi, thus P \psi = \pm e^{i \phi /2} \psi, since these are the only eigenstates satisfying the above equation.
The complete Hamiltonian of a diatomic molecule (as for all molecules) commutes with the parity operation P or E* and rovibronic (rotation-vibration-electronic) energy levels (often called rotational levels) can be given the parity symmetry label + or -. If one can show that the 
The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectivelyThe parity of a multiparticle state is the product of the parities of each state; in other words parity is a multiplicative quantum number.
Molecules: The complete (rotational-vibrational-electronic-nuclear spin) electromagnetic Hamiltonian of any molecule commutes with (or is invariant to) the parity operation P (or E*, in the notation introduced by Longuet-Higgins) and its eigenvalues can be given the parity symmetry label + or - as they are even or odd, respectively. The parity operation involves the inversion of electronic and nuclear spatial coordinates at the molecular center of mass. To see that the parity's eigenvalues are phase factors, we assume an eigenstate of the parity operation (this is realized because the intrinsic parity is a property of a particle species) and use the fact that two parity transformations leave the particle in the same state, thus the new wave function can differ by only a phase factor, i.e.: P^{2} \psi = e^{i \phi} \psi, thus P \psi = \pm e^{i \phi /2} \psi, since these are the only eigenstates satisfying the above equation.
The complete Hamiltonian of a diatomic molecule (as for all molecules) commutes with the parity operation P or E* and rovibronic (rotation-vibration-electronic) energy levels (often called rotational levels) can be given the parity symmetry label + or -. If one can show that the [SEP]What is the relationship between the complete electromagnetic Hamiltonian of a molecule and the parity operation?","['D', 'C', 'E']",1.0
What is the difference between active and passive transport in cells?,"Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. There are two types of passive transport, passive diffusion and facilitated diffusion. Transcellular transport can occur in three different ways active transport, passive transport, and transcytosis. == Active Transport == Main article: Active transport Active transport is the process of moving molecules from an area of low concentrations to an area of high concentration. Simple diffusion and osmosis are both forms of passive transport and require none of the cell's ATP energy. === Example of diffusion: Gas Exchange === A biological example of diffusion is the gas exchange that occurs during respiration within the human body. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis. There are two types of active transport, primary active transport and secondary active transport. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. Secondary active transport is when one solute moves down the electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration. Both types of passive transport will continue until the system reaches equilibrium. There are many other types of glucose transport proteins, some that do require energy, and are therefore not examples of passive transport. Passive transport follows Fick's first law. ==Diffusion== right|thumb|240px|Passive diffusion on a cell membrane. Transcellular transport is more likely to involve energy expenditure than paracellular transport. Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. Transcellular transport involves the transportation of solutes by a cell through a cell. Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient reflecting its diffusive nature. thumb|Insoluble molecules diffusing through an integral protein. It differs from transcellular transport, where the substances travel through the cell passing through both the apical membrane and basolateral membrane *2. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against its concentration gradient. An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT). == Passive Transport == Main article: Passive transport Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expelling any energy. 
",Active transport and passive transport both require energy input from the cell to function.,"Passive transport is powered by the arithmetic sum of osmosis and an electric field, while active transport requires energy input from the cell.","Passive transport requires energy input from the cell, while active transport is powered by the arithmetic sum of osmosis and an electric field.",Active transport and passive transport are both powered by the arithmetic sum of osmosis and an electric field.,"Active transport is powered by the arithmetic sum of osmosis and an electric field, while passive transport requires energy input from the cell.",B,kaggle200,"Transcellular transport involves the transportation of solutes by a cell ""through"" a cell. Transcellular transport can occur in three different ways active transport, passive transport, and transcytosis.
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to one of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
There are different ways through which cells can transport substances across the cell membrane. The two main pathways are passive transport and active transport. Passive transport is more direct and does not require the use of the cell's energy. It relies on an area that maintains a high-to-low concentration gradient. Active transport uses adenosine triphosphate (ATP) to transport a substance that moves against its concentration gradient.
An example of passive transport is ion fluxes through Na, K, Ca, and Cl channels. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). Formally, the molar Gibbs free energy change associated with successful transport is \Delta G = RT \ln([\text{ion}]_\text{in}/[\text{ion}]_\text{out}) + zFV_\text{membrane}, where R represents the gas constant, T represents absolute temperature, z is the charge per ion, and F represents the Faraday constant.
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. Fundamentally, substances follow Fick's first law, and move from an area of high concentration to one of low concentration because this movement increases the entropy of the overall system. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
An example of active transport of ions is the Na+-K+-ATPase (NKA). NKA is powered by the hydrolysis of ATP into ADP and an inorganic phosphate; for every molecule of ATP hydrolyzed, three Na+ are transported outside and two K+ are transported inside the cell. This makes the inside of the cell more negative than the outside and more specifically generates a membrane potential Vmembrane of about −60 mV. An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). Formally, the molar Gibbs free energy change associated with successful transport is \Delta G = RT \ln([\text{ion}]_\text{in}/[\text{ion}]_\text{out}) + zFV_\text{membrane}, where R represents the gas constant, T represents absolute temperature, z is the charge per ion, and F represents the Faraday constant.: 464–465 In the example of Na+, both terms tend to support transport: the negative electric potential inside the cell attracts the positive ion, and since Na+ is concentrated outside the cell, osmosis supports diffusion through the Na+ channel into the cell. In the case of K+, the effect of osmosis is reversed: although external ions are attracted by the negative intracellular potential, entropy seeks to diffuse the ions already concentrated inside the cell. The converse phenomenon (osmosis supports transport, electric potential opposes it) can be achieved for Na+ in cells with abnormal transmembrane potentials: at +70 mV, the Na+ influx halts; at higher potentials, it becomes an efflux.","Passive transport is more direct and does not require the use of the cell's energy. Active transport uses adenosine triphosphate (ATP) to transport a substance that moves against its concentration gradient.
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. Active transport uses adenosine triphosphate (ATP) to transport a substance that moves against its concentration gradient.
An example of passive transport is ion fluxes through Na, K, Ca, and Cl channels. The two main pathways are passive transport and active transport. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). This makes the inside of the cell more negative than the outside and more specifically generates a membrane potential Vmembrane of about −60 mV. An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Transcellular transport can occur in three different ways: active transport, passive transport, and transcytosis.
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
There are different ways through which cells can transport substances across the cell membrane. Transcellular transport involves the transportation of solutes by a cell ""through"" a cell. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
An example of active transport of ions is the Na+-K+-ATPase (NKA). There are different ways through which cells can tr
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins. Active transport uses adenosine triphosphate (ATP) to transport a substance that moves against its concentration gradient.
An example of passive transport is ion fluxes through Na, K, Ca, and Cl channels. The two main pathways are passive transport and active transport. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). This makes the inside of the cell more negative than the outside and more specifically generates a membrane potential Vmembrane of about −60 mV. An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Transcellular transport can occur in three different ways: active transport, passive transport, and transcytosis.
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes. Instead of using cellular energy, like active transport, passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
There are different ways through which cells can transport substances across the cell membrane. Transcellular transport involves the transportation of solutes by a cell ""through"" a cell. The four main kinds of passive transport are simple diffusion, facilitated diffusion, filtration, and/or osmosis.
An example of active transport of ions is the Na+-K+-ATPase (NKA). There are different ways through which cells can tr[SEP]What is the difference between active and passive transport in cells?","['E', 'B', 'C']",0.5
What is the Heisenberg uncertainty principle and how does it relate to angular momentum in quantum mechanics?,"Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum. Heisenberg's uncertainty relation is one of the fundamental results in quantum mechanics. The Heisenberg–Robertson uncertainty relation follows from the above uncertainty relation. ==Remarks== In quantum theory, one should distinguish between the uncertainty relation and the uncertainty principle. In simpler terms, the total angular momentum operator characterizes how a quantum system is changed when it is rotated. In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum. The angular momentum operator plays a central role in the theory of atomic and molecular physics and other quantum problems involving rotational symmetry. As above, there is an analogous relationship in classical physics: \left\\{L^2, L_x\right\\} = \left\\{L^2, L_y\right\\} = \left\\{L^2, L_z\right\\} = 0 where L_i is a component of the classical angular momentum operator, and \\{ ,\\} is the Poisson bracket.Goldstein et al, p. 410 Returning to the quantum case, the same commutation relations apply to the other angular momentum operators (spin and total angular momentum), as well, \begin{align} \left[ S^2, S_i \right] &= 0, \\\ \left[ J^2, J_i \right] &= 0. \end{align} ===Uncertainty principle=== In general, in quantum mechanics, when two observable operators do not commute, they are called complementary observables. In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion.Introductory Quantum Mechanics, Richard L. Liboff, 2nd Edition, There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). In quantum mechanics, the spin–statistics theorem relates the intrinsic spin of a particle (angular momentum not due to the orbital motion) to the particle statistics it obeys. The eigenvalues are related to l and m, as shown in the table below. ==Quantization== In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in ""quantum leaps"" between certain allowed values. The Heisenberg–Robertson–Schrödinger uncertainty relation was proved at the dawn of quantum formalism and is ever-present in the teaching and research on quantum mechanics. The new uncertainty relations not only capture the incompatibility of observables but also of quantities that are physically measurable (as variances can be measured in the experiment). ==References== ==Other sources== * Research Highlight, NATURE ASIA, 19 January 2015, ""Heisenberg's uncertainty relation gets stronger"" Category:Quantum mechanics Category:Mathematical physics For example, electrons always have ""spin 1/2"" while photons always have ""spin 1"" (details below). ===Total angular momentum=== Finally, there is total angular momentum \mathbf{J} = \left(J_x, J_y, J_z\right), which combines both the spin and orbital angular momentum of a particle or system: \mathbf{J} = \mathbf{L} + \mathbf{S}. However, the uncertainty principle says that it is impossible to measure the exact value for the momentum of a particle like an electron, given that its position has been determined at a given instant. 
More specifically, let R(\hat{n},\phi) be a rotation operator, which rotates any quantum state about axis \hat{n} by angle \phi. In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as:\mathbf{L} = -i\hbar(\mathbf{r} \times abla) where is the vector differential operator, del. ===Spin angular momentum=== There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator \mathbf{S} = \left(S_x, S_y, S_z\right). The term angular momentum operator can (confusingly) refer to either the total or the orbital angular momentum. The uncertainty principle also says that eliminating uncertainty about position maximizes uncertainty about momentum, and eliminating uncertainty about momentum maximizes uncertainty about position. However, the stronger uncertainty relations due to Maccone and Pati provide different uncertainty relations, based on the sum of variances that are guaranteed to be nontrivial whenever the observables are incompatible on the state of the quantum system. For example, if L_z/\hbar is roughly 100000000, it makes essentially no difference whether the precise value is an integer like 100000000 or 100000001, or a non- integer like 100000000.2—the discrete steps are currently too small to measure. ==Angular momentum as the generator of rotations== The most general and fundamental definition of angular momentum is as the generator of rotations. ","The Heisenberg uncertainty principle states that the axis of rotation of a quantum particle is undefined, and that quantum particles possess a type of non-orbital angular momentum called ""spin"". This is because angular momentum, like other quantities in quantum mechanics, is expressed as a tensorial operator in relativistic quantum mechanics.","The Heisenberg uncertainty principle states that the total angular momentum of a system of particles is equal to the sum of the individual particle angular momenta, and that the centre of mass is for the system. This is because angular momentum, like other quantities in quantum mechanics, is expressed as an operator with quantized eigenvalues.","The Heisenberg uncertainty principle states that the total angular momentum of a system of particles is subject to quantization, and that the individual particle angular momenta are expressed as operators. This is because angular momentum, like other quantities in quantum mechanics, is subject to the Heisenberg uncertainty principle.","The Heisenberg uncertainty principle states that the axis of rotation of a quantum particle is undefined, and that at any given time, only one projection of angular momentum can be measured with definite precision, while the other two remain uncertain. This is because angular momentum, like other quantities in quantum mechanics, is subject to quantization and expressed as an operator with quantized eigenvalues.","The Heisenberg uncertainty principle states that at any given time, only one projection of angular momentum can be measured with definite precision, while the other two remain uncertain. This is because angular momentum, like other quantities in quantum mechanics, is expressed as an operator with quantized eigenvalues.",E,kaggle200,"In general, in quantum mechanics, when two observable operators do not commute, they are called complementary observables. 
Two complementary observables cannot be measured simultaneously; instead they satisfy an uncertainty principle. The more accurately one observable is known, the less accurately the other one can be known. Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum.
In quantum mechanics, momentum is defined as a self-adjoint operator on the wave function. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.
In quantum mechanics, the total angular momentum quantum number parametrises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin).
In quantum mechanics and its applications to quantum many-particle systems, notably quantum chemistry, angular momentum diagrams, or more accurately from a mathematical viewpoint angular momentum graphs, are a diagrammatic method for representing angular momentum quantum states of a quantum system allowing calculations to be done symbolically. More specifically, the arrows encode angular momentum states in bra–ket notation and include the abstract nature of the state, such as tensor products and transformation rules.","In quantum mechanics and its applications to quantum many-particle systems, notably quantum chemistry, angular momentum diagrams, or more accurately from a mathematical viewpoint angular momentum graphs, are a diagrammatic method for representing angular momentum quantum states of a quantum system allowing calculations to be done symbolically. More specifically, the arrows encode angular momentum states in bra–ket notation and include the abstract nature of the state, such as tensor products and transformation rules.
==Uncertainty== In the definition L = r × p, six operators are involved: the position operators r_x, r_y, r_z, and the momentum operators p_x, p_y, p_z. However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. It turns out that the best that one can do is to simultaneously measure both the angular momentum vector's magnitude and its component along one axis.
In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called ""component"") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called ""spin"", but this angular momentum does not correspond to a spinning motion. In relativistic quantum mechanics the above relativistic definition becomes a tensorial operator.","Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum.
In quantum mechanics, momentum is defined as a self-adjoint operator on the wave function. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called ""component"") can be measured with definite precision; the other two then remain uncertain. The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.
In quantum mechanics, the total angular momentum quantum number parametrises the total angular momentum of a given particle, by combining its orbital angular momentum and its intrinsic angular momentum (i.e., its spin).
In quantum mechanics and its applications to quantum many-particle systems, notably quantum chemistry, angular momentum diagrams, or more accurately from a mathematical viewpoint angular momentum graphs, are a diagrammatic method for representing angular momentum quantum states of a quantum system allowing calculations to be done symbolically. However, the Heisenberg uncertainty principle tells us that it is not possible for all six of these quantities (the three position and three momentum components entering L = r × p) to be known simultaneously with arbitrary precision. Therefore, there are limits to what can be known or measured about a particle's angular momentum. Quantum particles do possess a type of non-orbital angular momentum called ""spin"", but this angular momentum does not correspond to a spinning motion. Two complementary observables cannot be measured simultaneously; instead they satisfy an uncertainty principle.
[SEP]What is the Heisenberg uncertainty principle and how does it relate to angular momentum in quantum mechanics?","['D', 'E', 'C']",0.5
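For reference, the statement behind the angular-momentum passages above can be made explicit. The following is the standard textbook form of the component commutation relations together with the Robertson uncertainty bound; it is a generic quantum-mechanics identity, not something taken from this row's own sources:
\begin{align} [L_x, L_y] &= i\hbar L_z, & [L_y, L_z] &= i\hbar L_x, & [L_z, L_x] &= i\hbar L_y, & [L^2, L_i] &= 0. \end{align}
Applying the Robertson relation \sigma_A \sigma_B \ge \tfrac{1}{2}\left|\langle [A, B] \rangle\right| to A = L_x, B = L_y gives \sigma_{L_x} \sigma_{L_y} \ge \tfrac{\hbar}{2} \left|\langle L_z \rangle\right|, so a state can have a definite L^2 and a definite value of one component (conventionally L_z), but in general not of two components at once, which is exactly why only one projection of angular momentum can be measured with definite precision.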
What is the difference between natural convection and forced convection?,"In broad terms, convection arises because of body forces acting within the fluid, such as gravity. ===Natural convection=== Natural convection is a type of flow, of motion of a liquid such as water or a gas such as air, in which the fluid motion is not generated by any external source (like a pump, fan, suction device, etc.) but by some parts of the fluid being heavier than other parts. * Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, by stirring, and pumps, creating an artificially induced convection current.http://www.engineersedge.com/heat_transfer/convection.htm Engineers Edge, 2009, ""Convection Heat Transfer"",Accessed 20/04/09 In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Forced convection is type of heat transport in which fluid motion is generated by an external source like a (pump, fan, suction device, etc.). In thermodynamics, convection often refers to heat transfer by convection, where the prefixed variant Natural Convection is used to distinguish the fluid mechanics concept of Convection (covered in this article) from convective heat transfer. In fluid thermodynamics, combined forced convection and natural convection, or mixed convection, occurs when natural convection and forced convection mechanisms act together to transfer heat. In other cases, natural buoyancy forces alone are entirely responsible for fluid motion when the fluid is heated, and this process is called ""natural convection"". Combined forced and natural convection, however, can be generally described in one of three ways. ===Two-dimensional mixed convection with aiding flow=== The first case is when natural convection aids forced convection. It should not be confused with the dynamic fluid phenomenon of convection, which is typically referred to as Natural Convection in thermodynamic contexts in order to distinguish the two. ==Overview== Convection can be ""forced"" by movement of a fluid by means other than buoyancy forces (for example, a water pump in an automobile engine). Natural convection will be less likely and less rapid with more rapid diffusion (thereby diffusing away the thermal gradient that is causing the convection) or a more viscous (sticky) fluid. Natural convection can occur when there are hot and cold regions of either air or water, because both water and air become less dense as they are heated. In fluid mechanics, convection has a broader sense: it refers to the motion of fluid driven by density (or other property) difference. Convection is often categorised or described by the main effect causing the convective flow, e.g. Thermal convection. It is customary to use the term convection when referring to this cumulative transport and the term advection when referring to the transport due to bulk fluid motion.Incropera DeWitt VBergham Lavine 2007, Introduction to Heat Transfer, 5th ed., pg. 6 ==Types== Two types of convective heat transfer may be distinguished: * Free or natural convection: when fluid motion is caused by buoyancy forces that result from the density variations due to variations of thermal ±temperature in the fluid. 
Natural convection will be more likely and/or more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection, and/or a larger distance through the convecting medium. Natural convection will be more likely and more rapid with a greater variation in density between the two fluids, a larger acceleration due to gravity that drives the convection or a larger distance through the convecting medium. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. In natural convection, an increase in temperature produces a reduction in density, which in turn causes fluid motion due to pressures and forces when fluids of different densities are affected by gravity (or any g-force). Convection (or convective heat transfer) is the transfer of heat from one place to another due to the movement of fluid. Convection is a process in which heat is carried from place to place by the bulk movement of a fluid and gases ==History== In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and engineering applications. ","Natural convection and forced convection are the same phenomenon, where a fluid is forced to flow over the surface by an internal source such as fans, stirring, and pumps, causing the fluid to be less dense and displaced.",Natural convection and forced convection are two different phenomena that do not relate to each other.,"Natural convection occurs when a fluid is in contact with a hot surface, causing the fluid to be less dense and displaced, while forced convection is when a fluid is forced to flow over the surface by an internal source such as fans, stirring, and pumps.","Natural convection is when a fluid is forced to flow over the surface by an internal source such as fans, stirring, and pumps, while forced convection occurs when a fluid is in contact with a hot surface, causing the fluid to be less dense and displaced.","Natural convection and forced convection are the same phenomenon, where a fluid is in contact with a hot surface, causing the fluid to be less dense and displaced, and then forced to flow over the surface by an internal source such as fans, stirring, and pumps.",C,kaggle200,"When analyzing potentially mixed convection, a parameter called the Archimedes number (Ar) parametrizes the relative strength of free and forced convection. The Archimedes number is the ratio of Grashof number and the square of Reynolds number, which represents the ratio of buoyancy force and inertia force, and which stands in for the contribution of natural convection. When Ar ≫ 1, natural convection dominates and when Ar ≪ 1, forced convection dominates.
Conventional ovens circulate hot air using natural convection and fan-assisted ovens circulate hot air using forced convection, so scientifically the term ""convection"" applies equally to both conventional (natural convection) ovens and fan-assisted (forced convection) ovens.
Combined forced and natural convection is often seen in very-high-power-output devices where the forced convection is not enough to dissipate all of the heat necessary. At this point, combining natural convection with forced convection will often deliver the desired results. Examples of these processes are nuclear reactor technology and some aspects of electronic cooling.
In fluid thermodynamics, combined forced convection and natural convection, or mixed convection, occurs when natural convection and forced convection mechanisms act together to transfer heat. This is also defined as situations where both pressure forces and buoyant forces interact. How much each form of convection contributes to the heat transfer is largely determined by the flow, temperature, geometry, and orientation. The nature of the fluid is also influential, since the Grashof number increases in a fluid as temperature increases, but is maximized at some point for a gas.","Combined forced and natural convection is often seen in very-high-power-output devices where the forced convection is not enough to dissipate all of the heat necessary. At this point, combining natural convection with forced convection will often deliver the desired results. Examples of these processes are nuclear reactor technology and some aspects of electronic cooling.
In fluid thermodynamics, combined forced convection and natural convection, or mixed convection, occurs when natural convection and forced convection mechanisms act together to transfer heat. This is also defined as situations where both pressure forces and buoyant forces interact. How much each form of convection contributes to the heat transfer is largely determined by the flow, temperature, geometry, and orientation. The nature of the fluid is also influential, since the Grashof number increases in a fluid as temperature increases, but is maximized at some point for a gas.
Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, by stirring, and pumps, creating an artificially induced convection current. In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Internal and external flow can also classify convection. Internal flow occurs when a fluid is enclosed by a solid boundary such as when flowing through a pipe. An external flow occurs when a fluid extends indefinitely without encountering a solid surface. Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. The bulk temperature, or the average fluid temperature, is a convenient reference point for evaluating properties related to convective heat transfer, particularly in applications related to flow in pipes and ducts.","Both of these types of convection, either natural or forced, can be internal or external because they are independent of each other. At this point, combining natural convection with forced convection will often deliver the desired results. When Ar ≫ 1, natural convection dominates and when Ar ≪ 1, forced convection dominates.
Conventional ovens circulate hot air using natural convection and fan-assisted ovens circulate hot air using forced convection, so scientifically the term ""convection"" applies equally to both conventional (natural convection) ovens and fan-assisted (forced convection) ovens.
Combined forced and natural convection is often seen in very-high-power-output devices where the forced convection is not enough to dissipate all of the heat necessary. Examples of these processes are nuclear reactor technology and some aspects of electronic cooling.
In fluid thermodynamics, combined forced convection and natural convection, or mixed convection, occurs when natural convection and forced convection mechanisms act together to transfer heat. How much each form of convection contributes to the heat transfer is largely determined by the flow, temperature, geometry, and orientation. When analyzing potentially mixed convection, a parameter called the Archimedes number (Ar) parametrizes the relative strength of free and forced convection. The nature of the fluid is also influential, since the Grashof number increases in a fluid as temperature increases, but is maximized at some point for a gas.
Forced convection: when a fluid is forced to flow over the surface by an internal source such as fans, by stirring, and pumps, creating an artificially induced convection current. In many real-life applications (e.g. heat losses at solar central receivers or cooling of photovoltaic panels), natural and forced convection occur at the same time (mixed convection). Internal and external flow can also classify convection.
[SEP]What is the difference between natural convection and forced convection?","['C', 'D', 'E']",1.0
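The Archimedes-number criterion described in the convection context above lends itself to a short numerical check. The sketch below is illustrative only: the function names and the example property values are assumptions, and the Grashof and Reynolds definitions used are the standard ones (Gr = g·β·ΔT·L³/ν², Re = U·L/ν), not values taken from this dataset.

```python
# Minimal sketch: classify a mixed-convection situation with the
# Archimedes number Ar = Gr / Re^2, as described in the row above.
# All names and sample numbers here are illustrative assumptions.

def grashof(g, beta, delta_T, L, nu):
    """Grashof number Gr = g * beta * dT * L^3 / nu^2 (buoyancy vs. viscosity)."""
    return g * beta * delta_T * L**3 / nu**2

def reynolds(U, L, nu):
    """Reynolds number Re = U * L / nu (inertia vs. viscosity)."""
    return U * L / nu

def archimedes(gr, re):
    """Ar = Gr / Re^2: Ar >> 1 natural convection dominates, Ar << 1 forced dominates."""
    return gr / re**2

if __name__ == "__main__":
    # Rough air-like properties at about 300 K (assumed, not from the dataset).
    g, beta, nu = 9.81, 1.0 / 300.0, 1.6e-5   # m/s^2, 1/K, m^2/s
    L, delta_T, U = 0.5, 20.0, 2.0            # m, K, m/s

    gr = grashof(g, beta, delta_T, L, nu)
    re = reynolds(U, L, nu)
    ar = archimedes(gr, re)

    if ar > 10:
        regime = "natural convection dominates"
    elif ar < 0.1:
        regime = "forced convection dominates"
    else:
        regime = "mixed convection"
    print(f"Gr={gr:.3g}, Re={re:.3g}, Ar={ar:.3g} -> {regime}")
```

With the sample numbers above, Ar comes out well below 1, i.e. the forced component dominates; raising ΔT or lowering the imposed velocity pushes the same check toward the natural-convection regime.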
What is magnetic susceptibility?,"Magnetic susceptibility indicates whether a material is attracted into or repelled out of a magnetic field. This allows classical physics to make useful predictions while avoiding the underlying quantum mechanical details. ==Definition== === Volume susceptibility === Magnetic susceptibility is a dimensionless proportionality constant that indicates the degree of magnetization of a material in response to an applied magnetic field. In electromagnetism, the magnetic susceptibility (; denoted , chi) is a measure of how much a material will become magnetized in an applied magnetic field. Susceptibility may refer to: ==Physics and engineering== In physics the susceptibility is a quantification for the change of an extensive property under variation of an intensive property. This allows an alternative description of all magnetization phenomena in terms of the quantities and , as opposed to the commonly used and . === Molar susceptibility and mass susceptibility === There are two other measures of susceptibility, the molar magnetic susceptibility () with unit m3/mol, and the mass magnetic susceptibility () with unit m3/kg that are defined below, where is the density with unit kg/m3 and is molar mass with unit kg/mol: \begin{align} \chi_\rho &= \frac{\chi_\text{v}}{\rho}; \\\ \chi_\text{m} &= M\chi_\rho = \frac{M}{\rho} \chi_\text{v}. \end{align} === In CGS units === The definitions above are according to the International System of Quantities (ISQ) upon which the SI is based. However, a useful simplification is to measure the magnetic susceptibility of a material and apply the macroscopic form of Maxwell's equations. Ferromagnetic, ferrimagnetic, or antiferromagnetic materials possess permanent magnetization even without external magnetic field and do not have a well defined zero-field susceptibility. ==Experimental measurement== Volume magnetic susceptibility is measured by the force change felt upon a substance when a magnetic field gradient is applied. The word may refer to: * In physics, the susceptibility of a material or substance describes its response to an applied field. In materials where susceptibility is anisotropic (different depending on direction), susceptibility is represented as a matrix known as the susceptibility tensor. In these cases, volume susceptibility is defined as a tensor M_i = H_j \chi_{ij} where and refer to the directions (e.g., of the and Cartesian coordinates) of the applied field and magnetization, respectively. A related term is magnetizability, the proportion between magnetic moment and magnetic flux density. An analogue non-linear relation between magnetization and magnetic field happens for antiferromagnetic materials. ==In the frequency domain== When the magnetic susceptibility is measured in response to an AC magnetic field (i.e. a magnetic field that varies sinusoidally), this is called AC susceptibility. In the study of liquid crystals the paranematic susceptibility (Latin: susceptibilis ""receptiveness"") is a quantity that describes the degree of induced order in a liquid crystal in response to an applied magnetic field. When the coercivity of the material parallel to an applied field is the smaller of the two, the differential susceptibility is a function of the applied field and self interactions, such as the magnetic anisotropy. 
The volume magnetic susceptibility, represented by the symbol (often simply , sometimes – magnetic, to distinguish from the electric susceptibility), is defined in the International System of Units – in other systems there may be additional constants – by the following relationship: \mathbf{M} = \chi_\text{v} \mathbf{H}. This method is highly accurate for diamagnetic materials with susceptibilities similar to water. ==Tensor susceptibility== The magnetic susceptibility of most crystals is not a scalar quantity. In electricity (electromagnetism), the electric susceptibility (\chi_{\text{e}}; Latin: susceptibilis ""receptive"") is a dimensionless proportionality constant that indicates the degree of polarization of a dielectric material in response to an applied electric field. Thus the volume magnetic susceptibility and the magnetic permeability are related by the following formula: \mu = \mu_0\left(1 + \chi_\text{v}\right). An important effect in metals under strong magnetic fields, is the oscillation of the differential susceptibility as function of . The magnetizability of materials comes from the atomic-level magnetic properties of the particles of which they are made. ","Magnetic susceptibility is a measure of how much a material will absorb magnetization in an applied magnetic field. It is the ratio of magnetization to the applied magnetizing field intensity, allowing for a simple classification of most materials' responses to an applied magnetic field.","Magnetic susceptibility is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization to the applied magnetizing field intensity, allowing for a simple classification of most materials' responses to an applied magnetic field.","Magnetic susceptibility is a measure of how much a material will resist magnetization in an applied magnetic field. It is the ratio of magnetization to the applied magnetizing field intensity, allowing for a simple classification of most materials' responses to an applied magnetic field.","Magnetic susceptibility is a measure of how much a material will conduct magnetization in an applied magnetic field. It is the ratio of magnetization to the applied magnetizing field intensity, allowing for a simple classification of most materials' responses to an applied magnetic field.","Magnetic susceptibility is a measure of how much a material will reflect magnetization in an applied magnetic field. It is the ratio of magnetization to the applied magnetizing field intensity, allowing for a simple classification of most materials' responses to an applied magnetic field.",B,kaggle200,"Magnetic susceptibility is a dimensionless proportionality constant that indicates the degree of magnetization of a material in response to an applied magnetic field. A related term is magnetizability, the proportion between magnetic moment and magnetic flux density. A closely related parameter is the permeability, which expresses the total magnetization of material and volume.
Because the relationship between the magnetization M and the applied magnetic field H is almost linear at low fields, the magnetic susceptibility χ can be taken there as the ratio M/H.
The magnetizations of diamagnetic materials vary with an applied magnetic field which can be given as:
In electromagnetism, the magnetic susceptibility (Latin: susceptibilis, ""receptive""; denoted χ) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization M (magnetic moment per unit volume) to the applied magnetizing field intensity H. This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, χ > 0, called paramagnetism, or an alignment against the field, χ < 0, called diamagnetism.","==Volume susceptibility== Magnetic susceptibility is a dimensionless proportionality constant that indicates the degree of magnetization of a material in response to an applied magnetic field. A related term is magnetizability, the proportion between magnetic moment and magnetic flux density. A closely related parameter is the permeability, which expresses the total magnetization of material and volume.
The Curie–Weiss law describes the changes in a material's magnetic susceptibility, χ , near its Curie temperature. The magnetic susceptibility is the ratio between the material's magnetization and the applied magnetic field.
In electromagnetism, the magnetic susceptibility (from Latin susceptibilis 'receptive'; denoted χ, chi) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization M (magnetic moment per unit volume) to the applied magnetizing field intensity H. This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, χ > 0, called paramagnetism, or an alignment against the field, χ < 0, called diamagnetism.","The magnetic susceptibility is the ratio between the material's magnetization and the applied magnetic field.
In electromagnetism, the magnetic susceptibility (from Latin susceptibilis 'receptive'; denoted χ, chi) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization M (magnetic moment per unit volume) to the applied magnetizing field intensity H. This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, χ > 0, called paramagnetism, or an alignment against the field, χ < 0, called diamagnetism.
Magnetic susceptibility is a dimensionless proportionality constant that indicates the degree of magnetization of a material in response to an applied magnetic field. A related term is magnetizability, the proportion between magnetic moment and magnetic flux density. A closely related parameter is the permeability, which expresses the total magnetization of material and volume.
Because the relationship between the magnetization M and the applied magnetic field H is almost linear at low fields, the magnetic susceptibility χ can be taken there as the ratio M/H.
The magnetizations of diamagnetic materials vary with an applied magnetic field, which can be given as:
The Curie–Weiss law describes the changes in a material's magnetic susceptibility, χ, near its Curie temperature.
[SEP]What is magnetic susceptibility?","['B', 'A', 'C']",1.0
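The relations quoted in this row (χ_v = M/H, μ = μ0(1 + χ_v), and the mass and molar susceptibilities χ_ρ = χ_v/ρ and χ_m = M_molar·χ_ρ) can be strung together in a few lines. The sketch below only illustrates those relations; the helper names and the sample numbers are assumptions, not values from the dataset.

```python
# Minimal sketch of the susceptibility relations quoted above:
#   M = chi_v * H,  mu = mu0 * (1 + chi_v),
#   chi_rho = chi_v / rho,  chi_mol = M_molar * chi_v / rho.
# Names and sample numbers are illustrative assumptions.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, approx. value in H/m

def volume_susceptibility(M, H):
    """chi_v = M / H (dimensionless in SI)."""
    return M / H

def permeability(chi_v):
    """mu = mu0 * (1 + chi_v)."""
    return MU0 * (1.0 + chi_v)

def mass_susceptibility(chi_v, rho):
    """chi_rho = chi_v / rho, in m^3/kg."""
    return chi_v / rho

def molar_susceptibility(chi_v, rho, molar_mass):
    """chi_mol = molar_mass * chi_v / rho, in m^3/mol."""
    return molar_mass * chi_v / rho

if __name__ == "__main__":
    # Hypothetical weakly magnetized sample in a weak field (assumed numbers).
    M, H = 2.0e-2, 1.0e3                 # magnetization and field, both in A/m
    rho, molar_mass = 8.0e3, 60e-3       # kg/m^3 and kg/mol, generic metal-like values

    chi_v = volume_susceptibility(M, H)
    kind = "paramagnetic (chi > 0)" if chi_v > 0 else "diamagnetic (chi < 0)"
    print(f"chi_v   = {chi_v:.3e} -> {kind}")
    print(f"mu      = {permeability(chi_v):.6e} H/m")
    print(f"chi_rho = {mass_susceptibility(chi_v, rho):.3e} m^3/kg")
    print(f"chi_mol = {molar_susceptibility(chi_v, rho, molar_mass):.3e} m^3/mol")
```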
"What is a transient condensation cloud, also known as a Wilson cloud?","A transient condensation cloud, also called a Wilson cloud, is observable surrounding large explosions in humid air. The lifetime of the Wilson cloud during nuclear air bursts can be shortened by the thermal radiation from the fireball, which heats the cloud above to the dew point and evaporates the droplets. ===Non-nuclear explosions=== Any sufficiently large explosion, such as one caused by a large quantity of conventional explosives or a volcanic eruption, can create a condensation cloud, as seen in Operation Sailor Hat or in the 2020 Beirut explosion, where a very large Wilson cloud expanded outwards from the blast. ===Aircraft and Rockets=== The same kind of condensation cloud is sometimes seen above the wings of aircraft in a moist atmosphere. Such vapor can also be seen in low pressure regions during high–g subsonic maneuvers of aircraft in humid conditions. ==Occurrence== ===Nuclear weapons testing=== Scientists observing the Operation Crossroads nuclear tests in 1946 at Bikini Atoll named that transitory cloud a ""Wilson cloud"" because the same pressure effect is employed in a Wilson cloud chamber to let condensation mark the tracks of electrically- charged sub-atomic particles. Analysts of later nuclear bomb tests used the more general term condensation cloud. Hence, the small, transient clouds that appear. Clouds consist of microscopic droplets of liquid water (warm clouds), tiny crystals of ice (cold clouds), or both (mixed phase clouds), along with microscopic particles of dust, smoke, or other matter, known as condensation nuclei.https://ssec.si.edu/stemvisions- blog/what-are-clouds Cloud droplets initially form by the condensation of water vapor onto condensation nuclei when the supersaturation of air exceeds a critical value according to Köhler theory. In humid air, the drop in temperature in the most rarefied portion of the shock wave can bring the air temperature below its dew point, at which moisture condenses to form a visible cloud of microscopic water droplets. The shape of the shock wave, influenced by different speed in different altitudes, and the temperature and humidity of different atmospheric layers determines the appearance of the Wilson clouds. In meteorology, a cloud is an aerosol consisting of a visible mass of miniature liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space. The vapor cone of a transonic aircraft or rocket on ascent is another example of a condensation cloud. ==See also== * Mushroom cloud * Rope trick effect * Contrail ==References== Category:Aerodynamics Category:Physical phenomena Category:Explosions Category:Cloud types As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This leads to at least some degree of adiabatic warming of the air which can result in the cloud droplets or crystals turning back into invisible water vapor. During nuclear tests, condensation rings around or above the fireball are commonly observed. Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. When a nuclear weapon or a large amount of a conventional explosive is detonated in sufficiently humid air, the ""negative phase"" of the shock wave causes a rarefaction of the air surrounding the explosion, but not contained within it. 
On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature. If the visibility is 1 km or higher, the visible condensation is termed mist. ====Multi-level or moderate vertical==== These clouds have low- to mid-level bases that form anywhere from near the surface to about and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus. As more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air. thumb|upright=1.35|Animation of cloud evolution from cumulus humilis to cumulonimbus capillatus incus One agent is the convective upward motion of air caused by daytime solar heating at surface level. There is evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud. ","A visible cloud of smoke that forms when a nuclear weapon or a large amount of a conventional explosive is detonated in humid air, due to the burning of materials in the explosion.","A visible cloud of microscopic water droplets that forms when a nuclear weapon or a large amount of a conventional explosive is detonated in humid air, due to a temporary cooling of the air caused by a rarefaction of the air surrounding the explosion.","A visible cloud of microscopic water droplets that forms when a nuclear weapon or a large amount of a conventional explosive is detonated in dry air, due to a temporary cooling of the air caused by a rarefaction of the air surrounding the explosion.","A visible cloud of gas that forms when a nuclear weapon or a large amount of a conventional explosive is detonated in humid air, due to the release of gases from the explosion.","A visible cloud of smoke that forms when a nuclear weapon or a large amount of a conventional explosive is detonated in dry air, due to the burning of materials in the explosion.",B,kaggle200,"Scientists observing the Operation Crossroads nuclear tests in 1946 at Bikini Atoll named that transitory cloud a ""Wilson cloud"" because the same pressure effect is employed in a Wilson cloud chamber to let condensation mark the tracks of electrically-charged sub-atomic particles. Analysts of later nuclear bomb tests used the more general term ""condensation cloud"".
Any sufficiently large explosion, such as one caused by a large quantity of conventional explosives or a volcanic eruption, can create a condensation cloud, as seen in Operation Sailor Hat or in the 2020 Beirut explosion, where a very large Wilson cloud expanded outwards from the blast.
A transient condensation cloud, also called a Wilson cloud, is observable surrounding large explosions in humid air.
When a nuclear weapon or a large amount of a conventional explosive is detonated in sufficiently humid air, the ""negative phase"" of the shock wave causes a rarefaction of the air surrounding the explosion, but not contained within it. This rarefaction results in a temporary cooling of that air, which causes a condensation of some of the water vapor contained in it. When the pressure and the temperature return to normal, the Wilson cloud dissipates.","The lifetime of the Wilson cloud during nuclear air bursts can be shortened by the thermal radiation from the fireball, which heats the cloud above to the dew point and evaporates the droplets.
===Non-nuclear explosions=== Any sufficiently large explosion, such as one caused by a large quantity of conventional explosives or a volcanic eruption, can create a condensation cloud, as seen in Operation Sailor Hat or in the 2020 Beirut explosion, where a very large Wilson cloud expanded outwards from the blast.
A transient condensation cloud, also called a Wilson cloud, is observable surrounding large explosions in humid air.
When a nuclear weapon or a large amount of a conventional explosive is detonated in sufficiently humid air, the ""negative phase"" of the shock wave causes a rarefaction of the air surrounding the explosion, but not contained within it. This rarefaction results in a temporary cooling of that air, which causes a condensation of some of the water vapor contained in it. When the pressure and the temperature return to normal, the Wilson cloud dissipates.","Analysts of later nuclear bomb tests used the more general term ""condensation cloud"".
Any sufficiently large explosion, such as one caused by a large quantity of conventional explosives or a volcanic eruption, can create a condensation cloud, as seen in Operation Sailor Hat or in the 2020 Beirut explosion, where a very large Wilson cloud expanded outwards from the blast.
A transient condensation cloud, also called a Wilson cloud, is observable surrounding large explosions in humid air.
When a nuclear weapon or a large amount of a conventional explosive is detonated in sufficiently humid air, the ""negative phase"" of the shock wave causes a rarefaction of the air surrounding the explosion, but not contained within it. This rarefaction results in a temporary cooling of that air, which causes a condensation of some of the water vapor contained in it. When the pressure and the temperature return to normal, the Wilson cloud dissipates. The lifetime of the Wilson cloud during nuclear air bursts can be shortened by the thermal radiation from the fireball, which heats the cloud above the dew point and evaporates the droplets.
Scientists observing the Operation Crossroads nuclear tests in 1946 at Bikini Atoll named that transitory cloud a ""Wilson cloud"" because the same pressure effect is employed in a Wilson cloud chamber to let condensation mark the tracks of electrically-charged sub-atomic particles.
[SEP]What is a transient condensation cloud, also known as a Wilson cloud?","['B', 'A', 'C']",1.0
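The mechanism described above (the rarefaction briefly cools humid air; if the parcel drops below its dew point, vapor condenses into a visible cloud) can be illustrated with a back-of-the-envelope check. The sketch below uses the common Magnus approximation for the dew point; the coefficients, the function names and the example numbers are assumptions chosen for illustration, not data from this row.

```python
import math

# Back-of-the-envelope check of the Wilson-cloud mechanism described above:
# cool a humid air parcel by dT; if it falls below its dew point, condensation occurs.
# Magnus-formula coefficients and all sample numbers are assumptions.
A, B = 17.62, 243.12  # Magnus coefficients (temperatures in deg C)

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point in deg C via the Magnus formula."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

def condenses(temp_c, rel_humidity_pct, cooling_c):
    """True if cooling the parcel by cooling_c brings it at or below its dew point."""
    return (temp_c - cooling_c) <= dew_point_c(temp_c, rel_humidity_pct)

if __name__ == "__main__":
    # Warm, humid air vs. a modest transient cooling from the rarefaction.
    for rh in (50, 70, 90):
        td = dew_point_c(28.0, rh)
        print(f"RH={rh}%  dew point={td:.1f} C  "
              f"5 C cooling -> condensation: {condenses(28.0, rh, 5.0)}")
```

Run with these assumed numbers, only the most humid case condenses, which matches the qualitative point in the row: the effect is prominent in sufficiently humid air and absent in dry air.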
What is a uniform tiling in the hyperbolic plane?,"In geometry, the tetrahexagonal tiling is a uniform tiling of the hyperbolic plane. In geometry, the tetraheptagonal tiling is a uniform tiling of the hyperbolic plane. In geometry, the pentahexagonal tiling is a uniform tiling of the hyperbolic plane. In geometry, the rhombitetrahexagonal tiling is a uniform tiling of the hyperbolic plane. Hyperbolic triangles (p q r) define compact uniform hyperbolic tilings. Selected families of uniform tilings are shown below (using the Poincaré disk model for the hyperbolic plane). Examples of uniform tilings Spherical Euclidean Hyperbolic Hyperbolic Hyperbolic Hyperbolic 100px {5,3} 5.5.5 100px {6,3} 6.6.6 100px {7,3} 7.7.7 100px {∞,3} ∞.∞.∞ Regular tilings {p,q} of the sphere, Euclidean plane, and hyperbolic plane using regular pentagonal, hexagonal and heptagonal and apeirogonal faces. This article shows the regular tiling up to p, q = 8, and uniform tilings in 12 families: (7 3 2), (8 3 2), (5 4 2), (6 4 2), (7 4 2), (8 4 2), (5 5 2), (6 5 2) (6 6 2), (7 7 2), (8 6 2), and (8 8 2). === Regular hyperbolic tilings === The simplest set of hyperbolic tilings are regular tilings {p,q}, which exist in a matrix with the regular polyhedra and Euclidean tilings. Regular tilings {p,q} of the sphere, Euclidean plane, and hyperbolic plane using regular pentagonal, hexagonal and heptagonal and apeirogonal faces. Regular tilings {p,q} of the sphere, Euclidean plane, and hyperbolic plane using regular pentagonal, hexagonal and heptagonal and apeirogonal faces. There are an infinite number of uniform tilings based on the Schwarz triangles (p q r) where + + < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle – the symmetry group is a hyperbolic triangle group. :See Template:Finite triangular hyperbolic tilings table == Quadrilateral domains== 320px|thumb|A quadrilateral domain has 9 generator point positions that define uniform tilings. Uniform tilings can be identified by their vertex configuration, a sequence of numbers representing the number of sides of the polygons around each vertex. Regular tilings {p,q} of the sphere, Euclidean plane, and hyperbolic plane using regular pentagonal, hexagonal and heptagonal and apeirogonal faces. 100px t{5,3} 10.10.3 100px t{6,3} 12.12.3 100px t{7,3} 14.14.3 100px t{∞,3} ∞.∞.3 Truncated tilings have 2p.2p.q vertex figures from regular {p,q}. The other edges are normal edges.) snub tetratetrahedron 50px snub cuboctahedron 50px snub cuboctahedron 50px snub icosidodecahedron 50px snub icosidodecahedron 50px snub trihexagonal tiling 50px snub trihexagonal tiling 50px Snub triheptagonal tiling 50px Snub triheptagonal tiling 50px Snub trioctagonal tiling 50px Snub trioctagonal tiling 50px Cantitruncation (tr) Bevel (b) tr{p,q} hexagonal prism 50px hexagonal prism 50px truncated tetratetrahedron 50px truncated cuboctahedron 50px truncated cuboctahedron 50px truncated icosidodecahedron 50px truncated icosidodecahedron 50px truncated trihexagonal tiling 50px truncated trihexagonal tiling 50px Truncated triheptagonal tiling 50px Truncated triheptagonal tiling 50px Truncated trioctagonal tiling 50px Truncated trioctagonal tiling 50px In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. 
there is an isometry mapping any vertex onto any other). This coloring can be called a rhombiheptaheptagonal tiling. 160px The dual tiling is made of rhombic faces and has a face configuration V4.7.4.7. == Related polyhedra and tiling == ==See also== *Uniform tilings in hyperbolic plane *List of regular polytopes ==References== * John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) * == External links == * * * Hyperbolic and Spherical Tiling Gallery * KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings * Hyperbolic Planar Tessellations, Don Hatch Category:Hyperbolic tilings Category:Isogonal tilings Category:Isotoxal tilings Category:Uniform tilings Uniform tilings may be regular (if also face- and edge-transitive), quasi-regular (if edge-transitive but not face- transitive) or semi-regular (if neither edge- nor face-transitive). It has Schläfli symbol of r{6,5} or t1{6,5}. == Uniform colorings == 120px == Related polyhedra and tiling == [(5,5,3)] reflective symmetry uniform tilings 60px 60px 60px 60px 60px 60px 60px ==References== * John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) * ==See also== *Square tiling *Tilings of regular polygons *List of uniform planar tilings *List of regular polytopes == External links == * * * Hyperbolic and Spherical Tiling Gallery * KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings * Hyperbolic Planar Tessellations, Don Hatch Category:Hyperbolic tilings Category:Isogonal tilings Category:Isotoxal tilings Category:Uniform tilings Removing half of the blue mirrors doubles the domain again into *3322 symmetry. :160px160px160px160px == Related polyhedra and tiling == ==See also== *Square tiling *Tilings of regular polygons *List of uniform planar tilings *List of regular polytopes ==References== * John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) * == External links == * * * Hyperbolic and Spherical Tiling Gallery * KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings * Hyperbolic Planar Tessellations, Don Hatch Category:Hyperbolic tilings Category:Isogonal tilings Category:Uniform tilings Adding a 2-fold rotation point in the center of each rhombi represents a (2*32) orbifold. :120px120px120px120px == Related polyhedra and tiling == ==See also== *Square tiling *Tilings of regular polygons *List of uniform planar tilings *List of regular polytopes ==References== * John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) * == External links == * * * Hyperbolic and Spherical Tiling Gallery * KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings * Hyperbolic Planar Tessellations, Don Hatch Category:Hyperbolic tilings Category:Isogonal tilings Category:Isotoxal tilings Category:Uniform tilings ",A uniform tiling in the hyperbolic plane is a tessellation of the hyperbolic plane with irregular polygons as faces. These are not vertex-transitive and isogonal.,A uniform tiling in the hyperbolic plane is a tessellation of the hyperbolic plane with regular polygons as faces. 
These are not vertex-transitive and isogonal.,A uniform tiling in the hyperbolic plane is a tessellation of the hyperbolic plane with irregular polygons as faces. These are vertex-transitive and isogonal.,"A uniform tiling in the hyperbolic plane is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces. These are vertex-transitive and isogonal.","A uniform tiling in the hyperbolic plane is an edge-to-edge filling of the hyperbolic plane, with irregular polygons as faces. These are vertex-transitive and isogonal.",D,kaggle200,"In geometry, the rhombtriapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of rr{∞,3}.
In geometry, the snub triapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of sr{∞,3}.
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry.
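As an illustrative aside (a sketch of a standard criterion, not drawn from the passages here): a regular tiling with Schläfli symbol {p,q} is spherical, Euclidean, or hyperbolic according to whether (p − 2)(q − 2) is less than, equal to, or greater than 4, which is what places tilings such as {6,5} (and its rectification r{6,5} mentioned above) in the hyperbolic plane. A minimal Python sketch of that test:

def classify_schlafli(p: int, q: int) -> str:
    """Classify the regular tiling {p, q} as spherical, Euclidean or hyperbolic."""
    s = (p - 2) * (q - 2)
    if s < 4:
        return "spherical"
    if s == 4:
        return "Euclidean"
    return "hyperbolic"

# {4,4} is the familiar Euclidean square tiling; {6,5} is hyperbolic.
print(classify_schlafli(4, 4))  # Euclidean
print(classify_schlafli(6, 5))  # hyperbolic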
It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometry. A uniform tiling in the hyperbolic plane (which may be regular, quasiregular or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).","In geometry, a kisrhombille is a uniform tiling of rhombic faces, divided with a center points into four triangles.
Examples: 3-6 kisrhombille – Euclidean plane 3-7 kisrhombille – hyperbolic plane 3-8 kisrhombille – hyperbolic plane 4-5 kisrhombille – hyperbolic plane
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry.
Tessellations in non-Euclidean geometries It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometry. A uniform tiling in the hyperbolic plane (that may be regular, quasiregular, or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells. In three-dimensional (3-D) hyperbolic space there are nine Coxeter group families of compact convex uniform honeycombs, generated as Wythoff constructions, and represented by permutations of rings of the Coxeter diagrams for each family.","A uniform tiling in the hyperbolic plane (which may be regular, quasiregular or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).A uniform tiling in the hyperbolic plane (that may be regular, quasiregular, or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells- In geometry, the rhombtriapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of rr{∞,3}.
In geometry, the snub triapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of sr{∞,3}.
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.eIn geometry, a kisrhombille is a uniform tiling of rhombic faces, divided with a center points into four triangles.
Examples: 3-6 kisrhombille – Euclidean plane 3-7 kisrhombille – hyperbolic plane 3-8 kisrhombille – hyperbolic plane 4-5 kisrhombille – hyperbolic plane
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.eIt follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry.
Tessellations in non-Euclidean geometries It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometryIt follows that all vertices are congruent, and the tiling has a high degree of rotational and trans","A uniform tiling in the hyperbolic plane (which may be regular, quasiregular or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).A uniform tiling in the hyperbolic plane (that may be regular, quasiregular, or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells- In geometry, the rhombtriapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of rr{∞,3}.
In geometry, the snub triapeirogonal tiling is a uniform tiling of the hyperbolic plane with a Schläfli symbol of sr{∞,3}.
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.eIn geometry, a kisrhombille is a uniform tiling of rhombic faces, divided with a center points into four triangles.
Examples: 3-6 kisrhombille – Euclidean plane 3-7 kisrhombille – hyperbolic plane 3-8 kisrhombille – hyperbolic plane 4-5 kisrhombille – hyperbolic plane
In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.eIt follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry.
Tessellations in non-Euclidean geometries It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometryIt follows that all vertices are congruent, and the tiling has a high degree of rotational and trans[SEP]What is a uniform tiling in the hyperbolic plane?","['D', 'E', 'A']",1.0
What is the relation between the three moment theorem and the bending moments at three successive supports of a continuous beam?,"The second equation is more general as it does not require that the weight of each segment be distributed uniformly. thumb|Figure 01-Sample continuous beam section ==Derivation of three moments equations == Mohr's theorem can be used to derive the three moment theorem (TMT). ===Mohr's first theorem=== The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02) thumb|Figure 02-Mohr's First Theorem ===Mohr's second theorem=== Consider two points k1 and k2 on a beam. The deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03) thumb|Figure03-Mohr's Second Theorem The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supports. ===The sign convention=== According to the Figure 04, # The moment M1, M2, and M3 be positive if they cause compression in the upper part of the beam. (sagging positive) # The deflection downward positive. In civil engineering and structural analysis Clapeyron's theorem of three moments is a relationship among the bending moments at three consecutive supports of a horizontal beam. The moment-area theorem is an engineering tool to derive the slope, rotation and deflection of beams and frames. A beam with both ends fixed is statically indeterminate to the 3rd degree, and any structural analysis method applicable on statically indeterminate beams can be used to calculate the fixed end moments. == Examples == In the following examples, clockwise moments are positive. 400px Concentrated load of magnitude P 400px Linearly distributed load of maximum intensity q0 400px Uniformly distributed load of intensity q 400px Couple of magnitude M0 The two cases with distributed loads can be derived from the case with concentrated load by integration. # Let A' B' and C' be the final positions of the beam ABC due to support settlements. thumb|Figure 04-Deflection Curve of a Continuous Beam Under Settlement ===Derivation of three moment theorem=== PB'Q is a tangent drawn at B' for final Elastic Curve A'B'C' of the beam ABC. Wheeler: An Elementary Course of Civil Engineering, 1876, Page 118 the bending moments M_A,\, M_B,\, M_C at the three points are related by: :M_A l + 2 M_B (l+l') +M_C l' = \frac{1}{4} w l^3 + \frac{1}{4} w' (l')^3. The moment distribution method is a structural analysis method for statically indeterminate beams and frames developed by Hardy Cross. This method is advantageous when we solve problems involving beams, especially for those subjected to a series of concentrated loadings or having segments with different moments of inertia. ==Theorem 1== The change in slope between any two points on the elastic curve equals the area of the M/EI (moment) diagram between these two points. 
:\theta_{A/B}={\int_A}^B\left(\frac{M}{EI}\right)dx where, * M = moment * EI = flexural rigidity * \theta_{A/B} = change in slope between points A and B * A, B = points on the elastic curve ==Theorem 2== The vertical deviation of a point A on an elastic curve with respect to the tangent which is extended from another point B equals the moment of the area under the M/EI diagram between those two points (A and B). The moment distribution method falls into the category of displacement method of structural analysis. == Implementation == In order to apply the moment distribution method to analyse a structure, the following things must be considered. === Fixed end moments === Fixed end moments are the moments produced at member ends by external loads. === Bending stiffness === The bending stiffness (EI/L) of a member is represented as the flexural rigidity of the member (product of the modulus of elasticity (E) and the second moment of area (I)) divided by the length (L) of the member. The fixed end moments are reaction moments developed in a beam member under certain load conditions with both ends fixed. Shear force and bending moment diagrams are analytical tools used in conjunction with structural analysis to help perform structural design by determining the value of shear forces and bending moments at a given point of a structural element such as a beam. The beam is considered to be three separate members, AB, BC, and CD, connected by fixed end (moment resisting) joints at B and C. *Members AB, BC, CD have the same span L = 10 \ m . thumb|400px|Shear and Bending moment diagram for a simply supported beam with a concentrated load at mid-span. The maximum and minimum values on the graphs represent the max forces and moments that this beam will have under these circumstances. ==Relationships among load, shear, and moment diagrams== Since this method can easily become unnecessarily complicated with relatively simple problems, it can be quite helpful to understand different relations between the loading, shear, and moment diagram. The differential equation that relates the beam deflection (w) to the bending moment (M) is : \frac{d^2 w}{dx^2} = - \frac{M}{EI} where E is the Young's modulus and I is the area moment of inertia of the beam cross-section. Arithmetically summing all moments in each respective columns gives the final moment values. === Result === *Moments at joints determined by the moment distribution method :M_A = 0 \ kN \cdot m :M_B = -11.569 \ kN \cdot m :M_C = -10.186 \ kN \cdot m :M_D = -13.657 \ kN \cdot m :The conventional engineer's sign convention is used here, i.e. positive moments cause elongation at the bottom part of a beam member. This moment is computed about point A where the deviation from B to A is to be determined. :t_{A/B} = {\int_A}^B \frac{M}{EI} x \;dx where, * M = moment * EI = flexural rigidity * t_{A/B} = deviation of tangent at point A with respect to the tangent at point B * A, B = points on the elastic curve ==Rule of sign convention== The deviation at any point on the elastic curve is positive if the point lies above the tangent, negative if the point is below the tangent; we measured it from left tangent, if θ is counterclockwise direction, the change in slope is positive, negative if θ is clockwise direction.Moment-Area Method Beam Deflection ==Procedure for analysis== The following procedure provides a method that may be used to determine the displacement and slope at a point on the elastic curve of a beam using the moment-area theorem. 
This equation can also be written as Srivastava and Gope: Strength of Materials, page 73 :M_A l + 2 M_B (l+l') +M_C l' = \frac{6 a_1 x_1}{l} + \frac{6 a_2 x_2}{l'} where a1 is the area on the bending moment diagram due to vertical loads on AB, a2 is the area due to loads on BC, x1 is the distance from A to the centroid of the bending moment diagram of beam AB, x2 is the distance from C to the centroid of the area of the bending moment diagram of beam BC. These four quantities have to be determined using two equations, the balance of forces in the beam and the balance of moments in the beam. ",The three moment theorem expresses the relation between the deflection of two points on a beam relative to the point of intersection between tangent at those two points and the vertical through the first point.,"The three moment theorem is used to calculate the maximum allowable bending moment of a beam, which is determined by the weight distribution of each segment of the beam.","The three moment theorem describes the relationship between bending moments at three successive supports of a continuous beam, subject to a loading on two adjacent spans with or without settlement of the supports.","The three moment theorem is used to calculate the weight distribution of each segment of a beam, which is required to apply Mohr's theorem.","The three moment theorem is used to derive the change in slope of a deflection curve between two points of a beam, which is equal to the area of the M/EI diagram between those two points.",C,kaggle200,"The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02)
Consider two points k1 and k2 on a beam. The deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03)
In civil engineering and structural analysis Clapeyron's theorem of three moments is a relationship among the bending moments at three consecutive supports of a horizontal beam.
The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supports.","Let A' B' and C' be the final positions of the beam ABC due to support settlements.
Derivation of three moment theorem PB'Q is a tangent drawn at B' for final Elastic Curve A'B'C' of the beam ABC. RB'S is a horizontal line drawn through B'. Consider, Triangles RB'P and QB'S.
\frac{PR}{RB'}=\frac{SQ}{B'S}, From (1), (2), and (3), \frac{\Delta_B-\Delta_A+PA'}{L_1}=\frac{\Delta_C-\Delta_B-QC'}{L_2} Draw the M/EI diagram to find the PA' and QC'.
From Mohr's Second Theorem PA' = First moment of area of M/EI diagram between A and B about A.
:PA'=\left(\frac{1}{2}\times\frac{M_1}{E_1 I_1}\times L_1\right)\times L_1\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_1\right)\times L_1\times\frac{2}{3}+\frac{A_1 X_1}{E_1 I_1} QC' = First moment of area of M/EI diagram between B and C about C.
:QC'=\left(\frac{1}{2}\times\frac{M_3}{E_2 I_2}\times L_2\right)\times L_2\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_2\right)\times L_2\times\frac{2}{3}+\frac{A_2 X_2}{E_2 I_2} Substitute in PA' and QC' on equation (a), the Three Moment Theorem (TMT) can be obtained.
Mohr's theorem can be used to derive the three moment theorem (TMT).
Mohr's first theorem The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02) Mohr's second theorem Consider two points k1 and k2 on a beam. The deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03) The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supports.","Let A' B' and C' be the final positions of the beam ABC due to support settlements.
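As a worked check of the three moment equation quoted above (a minimal sketch under the simplifying assumptions of constant EI, no support settlement, and two equal spans; the numbers are illustrative only): for two equal spans of length L under a uniform load w with simply supported ends, M_A = M_C = 0, the equation M_A l + 2M_B(l + l') + M_C l' = 6a_1x_1/l + 6a_2x_2/l' reduces to 4M_B L = wL^3/2, recovering the familiar support-moment magnitude wL^2/8.

# Three moment (Clapeyron) equation applied to two equal spans A-B-C of length L,
# each carrying a uniformly distributed load w, with end moments M_A = M_C = 0.
# Free bending moment diagram of one simply supported span under UDL: a parabola of
# peak w*L**2/8, area a = (2/3)*(w*L**2/8)*L = w*L**3/12, centroid at mid-span (x = L/2).

def support_moment_two_equal_spans(w: float, L: float) -> float:
    a1 = a2 = w * L**3 / 12.0   # areas of the free bending moment diagrams
    x1 = x2 = L / 2.0           # centroid distances measured from A and from C
    rhs = 6 * a1 * x1 / L + 6 * a2 * x2 / L
    # M_A*L + 2*M_B*(L + L) + M_C*L = rhs, with M_A = M_C = 0:
    return rhs / (4 * L)

w, L = 10.0, 4.0                # e.g. kN/m and m
print(support_moment_two_equal_spans(w, L))  # 20.0
print(w * L**2 / 8)                          # 20.0, the textbook magnitude of M_B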
Derivation of three moment theorem PB'Q is a tangent drawn at B' for final Elastic Curve A'B'C' of the beam ABCThe deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03) The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supportsThe deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03)
In civil engineering and structural analysis Clapeyron's theorem of three moments is a relationship among the bending moments at three consecutive supports of a horizontal beam.
The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supports. Consider, Triangles RB'P and QB'S.
\frac{PR}{RB'}=\frac{SQ}{B'S}, From (1), (2), and (3), \frac{\Delta_B-\Delta_A+PA'}{L_1}=\frac{\Delta_C-\Delta_B-QC'}{L_2} Draw the M/EI diagram to find the PA' and QC'.
From Mohr's Second Theorem PA' = First moment of area of M/EI diagram between A and B about A.
:PA'=\left(\frac{1}{2}\times\frac{M_1}{E_1 I_1}\times L_1\right)\times L_1\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_1\right)\times L_1\times\frac{2}{3}+\frac{A_1 X_1}{E_1 I_1} QC' = First moment of area of M/EI diagram between B and C about C.
:QC'=\left(\frac{1}{2}\times\frac{M_3}{E_2 I_2}\times L_2\right)\times L_2\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_2\right)\times L_2\times\frac{2}{3}+\frac{A_2 X_2}{E_2 I_2} Substitute in PA' and QC' on equation (a), the Three Moment Theorem (TMT) can be obtained.
Mohr's theorem can be used to derive the three moment theorem (TMT).
Mohr's first theorem The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02) Mohr's second theorem Consider two points k1 and k2 on a beam- The change in slope of a deflection curve between two points of a beam is equal to the area of the M/","Let A' B' and C' be the final positions of the beam ABC due to support settlements.
Derivation of three moment theorem PB'Q is a tangent drawn at B' for final Elastic Curve A'B'C' of the beam ABCThe deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03) The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supportsThe deflection of k1 and k2 relative to the point of intersection between tangent at k1 and k2 and vertical through k1 is equal to the moment of M/EI diagram between k1 and k2 about k1.(Figure 03)
In civil engineering and structural analysis Clapeyron's theorem of three moments is a relationship among the bending moments at three consecutive supports of a horizontal beam.
The three moment equation expresses the relation between bending moments at three successive supports of a continuous beam, subject to a loading on a two adjacent span with or without settlement of the supports. Consider, Triangles RB'P and QB'S.
\frac{PR}{RB'}=\frac{SQ}{B'S}, From (1), (2), and (3), \frac{\Delta_B-\Delta_A+PA'}{L_1}=\frac{\Delta_C-\Delta_B-QC'}{L_2} Draw the M/EI diagram to find the PA' and QC'.
From Mohr's Second Theorem PA' = First moment of area of M/EI diagram between A and B about A.
:PA'=\left(\frac{1}{2}\times\frac{M_1}{E_1 I_1}\times L_1\right)\times L_1\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_1\right)\times L_1\times\frac{2}{3}+\frac{A_1 X_1}{E_1 I_1} QC' = First moment of area of M/EI diagram between B and C about C.
:QC'=\left(\frac{1}{2}\times\frac{M_3}{E_2 I_2}\times L_2\right)\times L_2\times\frac{1}{3}+\left(\frac{1}{2}\times\frac{M_2}{E_2 I_2}\times L_2\right)\times L_2\times\frac{2}{3}+\frac{A_2 X_2}{E_2 I_2} Substitute in PA' and QC' on equation (a), the Three Moment Theorem (TMT) can be obtained.
Mohr's theorem can be used to derive the three moment theorem (TMT).
Mohr's first theorem The change in slope of a deflection curve between two points of a beam is equal to the area of the M/EI diagram between those two points.(Figure 02) Mohr's second theorem Consider two points k1 and k2 on a beam- The change in slope of a deflection curve between two points of a beam is equal to the area of the M/[SEP]What is the relation between the three moment theorem and the bending moments at three successive supports of a continuous beam?","['C', 'D', 'E']",1.0
"What is the throttling process, and why is it important?","A throttle is the mechanism by which fluid flow is managed by constriction or obstruction. However, liquid-propellant rockets can be throttled by means of valves which regulate the flow of fuel and oxidizer to the combustion chamber. The ""secondary"" throttle is operated either mechanically when the primary plate is opened past a certain amount, or via engine vacuum, influenced by the position of the accelerator pedal and engine load, allowing for greater air flow into the engine at high RPM and load and better efficiency at low RPM. Throttle bodies may also contain valves and adjustments to control the minimum airflow during idle. Throttling can be used to actively limit a user's upload and download rates on programs such as video streaming, BitTorrent protocols and other file sharing applications, as well as even out the usage of the total bandwidth supplied across all users on the network. However, factors such as improper maintenance, fouled spark plugs or bad injectors can reduce throttle response. Bandwidth throttling consists in the intentional limitation of the communication speed (bytes or kilobytes per second), of the ingoing (received) or outgoing (sent) data in a network node or in a network device. The difference is that bandwidth throttling regulates a bandwidth intensive device (such as a server) by limiting how much data that device can receive from each node / client or can output or can send for each response. Bandwidth throttling is also often used in Internet applications, in order to spread a load over a wider network to reduce local network congestion, or over a number of servers to avoid overloading individual ones, and so reduce their risk of the system crashing, and gain additional revenue by giving users an incentive to use more expensive tiered pricing schemes, where bandwidth is not throttled. ==Operation== A computer network typically consists of a number of servers, which host data and provide services to clients. The throttle of a diesel, when present, regulates the air flow into the engine. On a broader level, the Internet service provider may use bandwidth throttling to help reduce a user's usage of bandwidth that is supplied to the local network. Throttle response or vehicle responsiveness is a measure of how quickly a vehicle's prime mover, such as an internal combustion engine, can increase its power output in response to a driver's request for acceleration. The throttle is basically a poppet valve, or series of poppet valves which open in sequence to regulate the amount of steam admitted to the steam chests over the pistons. The term throttle has come to refer, informally, to any mechanism by which the power or speed of an engine is regulated, such as a car's accelerator pedal. For a steam locomotive, the valve which controls the steam is known as the regulator. == Internal combustion engines == thumb|upright|A cross-section view of a butterfly valve In an internal combustion engine, the throttle is a means of controlling an engine's power by regulating the amount of fuel or air entering the engine. For a gasoline engine, the throttle most commonly regulates the amount of air and fuel allowed to enter the engine. 
In order to prevent such occurrences, a client / server / system administrator may enable (if available) bandwidth throttling: * at , to control the speed of ingoing (received) data and/or to control the speed of outgoing (sent) data: ** a client program could be configured to throttle the sending (upload) of a big file to a server program in order to reserve some network bandwidth for other uses (i.e. for sending emails with attached data, browsing web sites, etc.); ** a server program (i.e. web server) could throttle its outgoing data to allow more concurrent active client connections without using too much network bandwidth (i.e. using only 90% of available bandwidth in order to keep a reserve for other activities, etc.); :: examples: assuming to have a server site with speed access to Internet of 100MB/s (around 1000Mbit/s), assuming that most clients have a 1MB/s (around 10Mbit/s) network speed access to Internet and assuming to be able to download huge files (i.e. 1 GB each): ::* with bandwidth throttling, a server using a max. output speed of 100kB/s (around 1Mbit/s) for each TCP connection, could allow at least (or even 10000 if output is limited to 10kB/s) (active connections means that data content, such as a big file, is being downloaded from server to client); ::* without bandwidth throttling, a server could efficiently serve only (100MB/s / 1MB/s) before saturating network bandwidth; a saturated network (i.e. with a bottleneck through an Internet Access Point) could slow down a lot the attempts to establish other new connections or even to force them to fail because of timeouts, etc.; besides this new active connections could not get easily or fastly their proper share of bandwidth. * at , to control the speed of data received or sent both at low level (data packets) and/or at high level (i.e. by inspecting application protocol data): ** policies similar or even more sophisticated than those of application software level could be set in low level network devices near Internet access point. ==Application== A bandwidth intensive device, such as a server, might limit (throttle) the speed at which it receives or sends data, in order to avoid overloading its processing capacity or to saturate network bandwidth. Some modern internal combustion engines do not use a traditional throttle, instead relying on their variable intake valve timing system to regulate the airflow into the cylinders, although the end result is the same, albeit with less pumping losses. == Throttle body == thumb|The components of a typical throttle body In fuel injected engines, the throttle body is the part of the air intake system that controls the amount of air flowing into the engine, in response to driver accelerator pedal input in the main. 
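To make the arithmetic in the server example above concrete, a minimal sketch (an illustration of the stated numbers, not code from any particular server) of how a per-connection cap bounds the number of simultaneously served connections:

def max_concurrent_connections(uplink_bytes_per_s: float, per_connection_cap_bytes_per_s: float) -> int:
    """Upper bound on concurrently served connections before the uplink saturates."""
    return int(uplink_bytes_per_s // per_connection_cap_bytes_per_s)

uplink = 100 * 10**6                                     # 100 MB/s total server bandwidth
print(max_concurrent_connections(uplink, 100 * 10**3))   # 100 kB/s cap -> 1000 connections
print(max_concurrent_connections(uplink, 10 * 10**3))    # 10 kB/s cap  -> 10000 connections
print(max_concurrent_connections(uplink, 1 * 10**6))     # unthrottled 1 MB/s clients -> 100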
Increased throttle response is often confused with increased power (Since increasing throttle response reduces the time needed to reach higher RPM speeds and consequently provides immediate access to an internal combustion engine's power and makes a slow car equipped with that engine, for example, feel quickerhttps://pedalcommander.com/blogs/garage/throttle-response-all- aspects#:~:text=The%20faster%20the%20throttle%20response%20your%20car%20has%2C%20the%20less%20time%20it%20takes%20to%20reach%20higher%20engine%20speeds.%20So%2C%20this%20process%20offers%20instant%20access%20to%20the%20engine%E2%80%99s%20power.%20For%20this%20reason%2C%20a%20good%20throttle%20response%20can%20make%20a%20slow%20car%20faster) but is more accurately described as time rate of change of power levels. == Gasoline vs diesel == Formerly, gasoline/petrol engines exhibited better throttle response than diesel engines. The effective way to increase the throttle's lifespan is through regular maintenance and cleaning. == See also == * Adapted automobile ==References== ==External links== Category:Engine technology Category:Engine fuel system technology Category:Engine components ","The throttling process is a steady flow of a fluid through a flow resistance, such as a valve or porous plug, and is responsible for the pressure increase in domestic refrigerators. This process is important because it is at the heart of the refrigeration cycle.","The throttling process is a steady adiabatic flow of a fluid through a flow resistance, such as a valve or porous plug, and is responsible for the temperature drop in domestic refrigerators. This process is important because it is at the heart of the refrigeration cycle.","The throttling process is a steady adiabatic flow of a fluid through a flow resistance, such as a valve or porous plug, and is responsible for the pressure drop in domestic refrigerators. This process is important because it is at the heart of the refrigeration cycle.","The throttling process is a steady flow of a fluid through a flow resistance, such as a valve or porous plug, and is responsible for the temperature increase in domestic refrigerators. This process is important because it is at the heart of the refrigeration cycle.","The throttling process is a steady adiabatic flow of a fluid through a flow resistance, such as a valve or porous plug, and is responsible for the temperature drop in domestic refrigerators. This process is not important because it is not used in the refrigeration cycle.",B,kaggle200,"The throttling process is a good example of an isoenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum the associated terms in the energy balance is null, thus rendering the transformation isoenthalpic. The lifting of a relief (or safety) valve on a pressure vessel is an example of throttling process. The specific enthalpy of the fluid inside the pressure vessel is the same as the specific enthalpy of the fluid as it escapes through the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial process. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves as these will produce heat which can be detected by thermocouple or thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
The Joule expansion should not be confused with the Joule–Thomson expansion or ""throttling process"" which refers to the steady flow of a gas from a region of higher pressure to one of lower pressure via a valve or porous plug.
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.","If the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not change. (If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)
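Spelling out the single step this relies on (a standard result, restated here for clarity rather than taken verbatim from the sources above): the steady-flow energy balance across an adiabatic flow resistance with no shaft work and negligible kinetic-energy change gives
:h_1 = h_2
so throttling is isenthalpic, and the temperature change for a given pressure drop is set by the Joule–Thomson coefficient
:\mu_{JT} = \left(\frac{\partial T}{\partial p}\right)_h
which is zero for an ideal gas (no temperature change), positive where a pressure drop cools the fluid (the refrigerator case), and negative where it warms it.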
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial process. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves as these will produce heat which can be detected by thermocouple or thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
Throttling One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.","Throttling is a fundamentally irreversible processThe throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
The Joule expansion should not be confused with the Joule–Thomson expansion or ""throttling process"" which refers to the steady flow of a gas from a region of higher pressure to one of lower pressure via a valve or porous plug.
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansionThe lifting of a relief (or safety) valve on a pressure vessel is an example of throttling process(If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial processThe throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
Throttling One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansionIf the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not changeWith a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial process- The throttling process is a good example of an isoenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum the associated terms in the energy balance is null, thus rendering the transformation isoenthalpicIt concerns a steady adiabatic flow of a fluid through a flo","Throttling is a fundamentally irreversible processThe throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
The Joule expansion should not be confused with the Joule–Thomson expansion or ""throttling process"" which refers to the steady flow of a gas from a region of higher pressure to one of lower pressure via a valve or porous plug.
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansionThe lifting of a relief (or safety) valve on a pressure vessel is an example of throttling process(If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial processThe throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance.
Throttling One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansionIf the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not changeWith a knowledge of the specific enthalpy of the fluid and the pressure outside the pressure vessel, it is possible to determine the temperature and speed of the escaping fluid.
The gas-cooling throttling process is commonly exploited in refrigeration processes such as liquefiers in air separation industrial process- The throttling process is a good example of an isoenthalpic process in which significant changes in pressure and temperature can occur to the fluid, and yet the net sum the associated terms in the energy balance is null, thus rendering the transformation isoenthalpicIt concerns a steady adiabatic flow of a fluid through a flo[SEP]What is the throttling process, and why is it important?","['B', 'C', 'E']",1.0
What happens to excess base metal as a solution cools from the upper transformation temperature towards an insoluble state?,"Furthermore, this melting may begin at a temperature below the equilibrium solidus temperature of the alloy. Recalescence also occurs after supercooling, when the supercooled liquid suddenly crystallizes, forming a solid but releasing heat in the process. ==See also== * Allotropy * Phase transition * Thermal analysis ==References== Category:Metallurgy Category:Phase transitions Category:Thermodynamic properties If we rearrange equation (2) to isolate the number of oscillators per unit volume we get the critical concentration of oscillators (Nc) at which εs becomes infinite, indicating a metallic solid and the transition from an insulator to a metal. During this process, atomic diffusion occurs, which produces compositionally homogeneous grains. Unlike hot working, cold working causes the crystal grains and inclusions to distort following the flow of the metal; which may cause work hardening and anisotropic material properties. As a casting having a cored structure is reheated, grain boundary regions will melt first in as much as they are richer in the low-melting component. The center of each grain, which is the first part to freeze, is rich in the high-melting element (e.g., nickel for this Cu–Ni system), whereas the concentration of the low-melting element increases with position from this region to the grain boundary. An example of this Peierls insulator is the blue bronze K0.3MoO3, which undergoes MIT at T = 180 K. Insulator behavior in metals can also arise from the distortions and lattice defects, the transition of which is known as the Anderson MIT. == Polarization Catastrophe == The polarization catastrophe model describes the transition of a material from an insulator to a metal. Coring may be eliminated by a homogenization heat treatment carried out at a temperature below the solidus point for the particular alloy composition. Therefore, the process will be exothermic. Coring happens when a heated alloy, such as a Cu-Ni system, cools in non- equilibrium conditions. Recalescence is an increase in temperature that occurs while cooling metal when a change in structure with an increase in entropy occurs. This model considers the electrons in a solid to act as oscillators and the conditions for this transition to occur is determined by the number of oscillators per unit volume of the material. In metallurgy, cold forming or cold working is any metalworking process in which metal is shaped below its recrystallization temperature, usually at the ambient temperature. The polarization catastrophe model also theorizes that, with a high enough density, and thus a low enough molar volume, any solid could become metallic in character. Nc = 3ε0mω02/e2 (3) This expression creates a boundary that defines the transition of a material from an insulator to a metal. Metal–insulator transitions are transitions of a material from a metal (material with good electrical conductivity of electric charges) to an insulator (material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature,_{2}$ |url=https://link.aps.org/doi/10.1103/PhysRevLett.110.056601 |journal=Physical Review Letters |volume=110 |issue=5 |pages=056601 |doi=10.1103/PhysRevLett.110.056601|pmid=23414038 }} pressure or, in case of a semiconductor, doping. 
== History == The basic distinction between metals and insulators was proposed by Bethe, Sommerfeld and Bloch in 1928/1929. Since then, these materials as well as others exhibiting a transition between a metal and an insulator have been extensively studied, e.g. by Sir Nevill Mott, after whom the insulating state is named Mott insulator. However, some compounds have been found which show insulating behavior even for partially filled bands. ","The excess base metal will often solidify, becoming the proeutectoid until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","The excess base metal will often crystallize-out, becoming the proeutectoid until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","The excess base metal will often dissolve, becoming the proeutectoid until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","The excess base metal will often liquefy, becoming the proeutectoid until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","The excess base metal will often evaporate, becoming the proeutectoid until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.",B,kaggle200,"To overcome this reversibility, the reaction often uses an excess of base to trap the water as hydrates.
Base excess (or deficit) is one of several values typically reported with arterial blood gas analysis that is derived from other measured data.
In physiology, base excess and base deficit refer to an excess or deficit, respectively, in the amount of base present in the blood. The value is usually reported as a concentration in units of mEq/L (mmol/L), with positive numbers indicating an excess of base and negative a deficit. A typical reference range for base excess is −2 to +2 mEq/L.
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests"". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the ""pro eutectoid phase"". These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to ""crystallize-out"", becoming the pro eutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","In the context of plated metal products, the base metal underlies the plating metal, as copper underlies silver in Sheffield plate.
Hypereutectoid alloys A hypereutectic alloy also has different melting points. However, between these points, it is the constituent with the higher melting point that will be solid. Similarly, a hypereutectoid alloy has two critical temperatures. When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize-out first, forming the pro-eutectoid. This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests"". Between these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the ""pro eutectoid phase"". These two temperatures are called the upper (A3) and lower (A1) transformation temperatures. As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to ""crystallize-out"", becoming the pro eutectoid. This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.","As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to ""crystallize-out"", becoming the pro eutectoidA typical reference range for base excess is −2 to +2 mEq/L.
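As a concrete illustration of the hypoeutectoid behaviour just described (a sketch that assumes the iron–carbon system purely as an example, with a eutectoid composition of about 0.76 wt% C and a ferrite solubility limit of about 0.022 wt% C), the lever rule gives the mass fraction that crystallizes out as the proeutectoid phase before the remainder reaches the eutectoid composition:

def proeutectoid_fraction(c0: float, c_eutectoid: float = 0.76, c_alpha: float = 0.022) -> float:
    """Lever-rule fraction of proeutectoid phase formed just above the eutectoid temperature.

    c0 is the overall composition (wt% solute) of a hypoeutectoid alloy, c_alpha < c0 < c_eutectoid.
    The remaining (1 - fraction) of the solution reaches the eutectoid level and then transforms
    to the eutectoid microstructure.
    """
    return (c_eutectoid - c0) / (c_eutectoid - c_alpha)

print(round(proeutectoid_fraction(0.40), 3))  # ~0.488 of a 0.40 wt% C steel becomes proeutectoid ferrite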
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests""When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize-out first, forming the pro-eutectoidBetween these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the ""pro eutectoid phase""This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructureThis will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests""These two temperatures are called the upper (A) and lower (A) transformation temperaturesHowever, between these points, it is the constituent with the higher melting point that will be solidThese two temperatures are called the upper (A3) and lower (A1) transformation temperaturesIn the context of plated metal products, the base metal underlies the plating metal, as copper underlies silver in Sheffield plate.
Hypereutectoid alloys A hypereutectic alloy also has different melting pointsSimilarly, a hypereutectoid alloy has two critical temperatures- To overcome this reversibility, the reaction often uses an excess of base to trap the water as hydrates.
Base excess (or deficit) is one of several values typically reported with arterial blood gas analysis that is derived from other measured data.
In physiology, base excess and base deficit refer to an excess or deficit, respectively, in the amount of base present in the bloodThe valu","As the solution cools from the upper transformation temperature toward an insoluble state, the excess base metal will often be forced to ""crystallize-out"", becoming the pro eutectoidA typical reference range for base excess is −2 to +2 mEq/L.
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests""When cooling a hypereutectoid alloy from the upper transformation temperature, it will usually be the excess solutes that crystallize-out first, forming the pro-eutectoidBetween these two temperatures, the alloy will exist partly as the solution and partly as a separate crystallizing phase, called the ""pro eutectoid phase""This will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructureThis will occur until the remaining concentration of solutes reaches the eutectoid level, which will then crystallize as a separate microstructure.This continues until the concentration in the remaining alloy becomes eutectoid, which then crystallizes into a separate microstructure.
Similarly, a hypoeutectoid alloy has two critical temperatures, called ""arrests""These two temperatures are called the upper (A) and lower (A) transformation temperaturesHowever, between these points, it is the constituent with the higher melting point that will be solidThese two temperatures are called the upper (A3) and lower (A1) transformation temperaturesIn the context of plated metal products, the base metal underlies the plating metal, as copper underlies silver in Sheffield plate.
Hypereutectoid alloys A hypereutectic alloy also has different melting pointsSimilarly, a hypereutectoid alloy has two critical temperatures- To overcome this reversibility, the reaction often uses an excess of base to trap the water as hydrates.
Base excess (or deficit) is one of several values typically reported with arterial blood gas analysis that is derived from other measured data.
In physiology, base excess and base deficit refer to an excess or deficit, respectively, in the amount of base present in the bloodThe valu[SEP]What happens to excess base metal as a solution cools from the upper transformation temperature towards an insoluble state?","['B', 'A', 'C']",1.0
"What is the relationship between mass, force, and acceleration, according to Sir Isaac Newton's laws of motion?","Newton first set out the definition of mass This was then used to define the ""quantity of motion"" (today called momentum), and the principle of inertia in which mass replaces the previous Cartesian notion of intrinsic force. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them: left|200px|Diagram of two masses attracting one another : F = G \frac{m_1 m_2}{r^2}\ where * F is the force between the masses; * G is the Newtonian constant of gravitation (); * m1 is the first mass; * m2 is the second mass; * r is the distance between the centers of the masses. thumb|upright=2.0|Error plot showing experimental values for G. Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G. The force is proportional to the product of the two masses, and inversely proportional to the square of the distance between them. Also equations of motion can be formulated which connect acceleration and force. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Newtonian refers to the work of Isaac Newton, in particular: * Newtonian mechanics, i.e. classical mechanics * Newtonian telescope, a type of reflecting telescope * Newtonian cosmology * Newtonian dynamics * Newtonianism, the philosophical principle of applying Newton's methods in a variety of fields * Newtonian fluid, a fluid that flows like water--its shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear ** Non-Newtonian fluids, in which the viscosity changes with the applied shear force ==Supplementary material== * List of things named after Isaac Newton Newton's law of universal gravitation is usually stated as that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers.It was shown separately that separated spherically symmetrical masses attract and are attracted as if all their mass were concentrated at their centers. The equation for universal gravitation thus takes the form: : F=G\frac{m_1m_2}{r^2}, where F is the gravitational force acting between two objects, m1 and m2 are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant. Newton's role in relation to the inverse square law was not as it has sometimes been represented. (English: The Mathematical Principles of Natural Philosophy) often referred to as simply the (), is a book by Isaac Newton that expounds Newton's laws of motion and his law of universal gravitation. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. 
Just as Newton examined consequences of different conceivable laws of attraction in Book 1, here he examines different conceivable laws of resistance; thus Section 1 discusses resistance in direct proportion to velocity, and Section 2 goes on to examine the implications of resistance in proportion to the square of velocity. In his notes, Newton wrote that the inverse square law arose naturally due to the structure of matter. Because of the Lorentz transformation and time dilation, the concepts of time and distance become more complex, which also leads to more complex definitions of ""acceleration"". In this formula, quantities in bold represent vectors. \mathbf{F}_{21} = \- G {m_1 m_2 \over {\vert \mathbf{r}_{21} \vert}^2} \, \mathbf{\hat{r}}_{21} where * F21 is the force applied on object 2 exerted by object 1, * G is the gravitational constant, * m1 and m2 are respectively the masses of objects 1 and 2, * |r21| = |r2 − r1| is the distance between objects 1 and 2, and * \mathbf{\hat{r}}_{21} \ \stackrel{\mathrm{def}}{=}\ \frac{\mathbf{r}_2 - \mathbf{r}_1}{\vert\mathbf{r}_2 - \mathbf{r}_1\vert} is the unit vector from object 1 to object 2.The vector difference r2 − r1 points from object 1 to object 2. In today's language, the law states that every point mass attracts every other point mass by a force acting along the line intersecting the two points. Newton's law has later been superseded by Albert Einstein's theory of general relativity, but the universality of gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Accelerations in special relativity (SR) follow, as in Newtonian Mechanics, by differentiation of velocity with respect to time. He became a fellow of the Royal Society and the second Lucasian Professor of Mathematics (succeeding Isaac Barrow) at Trinity College, Cambridge. ===Newton's early work on motion=== In the 1660s Newton studied the motion of colliding bodies, and deduced that the centre of mass of two colliding bodies remains in uniform motion. Curiously, for today's readers, the exposition looks dimensionally incorrect, since Newton does not introduce the dimension of time in rates of changes of quantities. ","Mass is a property that determines the weight of an object. According to Newton's laws of motion and the formula F = ma, an object with a mass of one kilogram accelerates at one meter per second per second when acted upon by a force of one newton.","Mass is an inertial property that determines an object's tendency to remain at constant velocity unless acted upon by an outside force. According to Newton's laws of motion and the formula F = ma, an object with a mass of one kilogram accelerates at ten meters per second per second when acted upon by a force of one newton.","Mass is an inertial property that determines an object's tendency to remain at constant velocity unless acted upon by an outside force. According to Newton's laws of motion and the formula F = ma, an object with a mass of one kilogram accelerates at ten meters per second per second when acted upon by a force of ten newtons.","Mass is an inertial property that determines an object's tendency to remain at constant velocity unless acted upon by an outside force. According to Newton's laws of motion and the formula F = ma, an object with a mass of one kilogram accelerates at one meter per second per second when acted upon by a force of one newton.","Mass is a property that determines the size of an object. 
According to Newton's laws of motion and the formula F = ma, an object with a mass of one kilogram accelerates at one meter per second per second when acted upon by a force of ten newtons.",D,kaggle200,"The newton (symbol: N) is the unit of force in the International System of Units (SI). It is defined as 1 kg⋅m/s², the force which gives a mass of 1 kilogram an acceleration of 1 metre per second per second. It is named after Isaac Newton in recognition of his work on classical mechanics, specifically Newton's second law of motion.
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
A better scientific definition of mass is as a measure of inertia, that is, the tendency of an object not to change its current state of motion (to remain at constant velocity) unless acted on by an external unbalanced force. Gravitational ""weight"" is the force created when a mass is acted upon by a gravitational field and the object is not allowed to free-fall, but is supported or retarded by a mechanical force, such as the surface of a planet. Such a force constitutes weight, and it can be added to by any other kind of force.
Mass is (among other properties) an ""inertial"" property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, an object with a mass, ""m"", of one kilogram accelerates, ""a"", at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, ""F"", of one newton.","Newton's laws of motion are three basic laws of classical mechanics that describe the relationship between the motion of an object and the forces acting on it. These laws can be paraphrased as follows: A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force.
When a body is acted upon by a force, the time rate of change of its momentum equals the force.
Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, which was first published on July 5, 1687. Newton's three laws are: A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. (This is known as the law of inertia.) Force (\vec{F}) is equal to the change in momentum per change in time (\Delta(m\vec{v})/\Delta t). For a constant mass, force equals mass times acceleration (\vec{F} = m\vec{a}).
Mass is (among other properties) an inertial property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, F = ma, an object with a mass, m, of one kilogram accelerates, a, at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, F, of one newton.","Newton's laws of motion are three basic laws of classical mechanics that describe the relationship between the motion of an object and the forces acting on it. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, F = ma, an object with a mass, m, of one kilogram accelerates, a, at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, F, of one newton. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, an object with a mass, ""m"", of one kilogram accelerates, ""a"", at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, ""F"", of one newton. For a constant mass, force equals mass times acceleration (\vec{F} = m\vec{a}).
Mass is (among other properties) an inertial property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. These laws describe the relationship between the forces acting on a body and the motion of that body. It is named after Isaac Newton in recognition of his work on classical mechanics, specifically Newton's second law of motion.
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an object to not change its current state of motion (to remain at constant velocity) unless acted on by an external unbalanced force. These laws can be paraphrased as follows: A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force.
When a body is acted upon by a force, the time rate of change of its momentum equals the force.
Classical mechanics is fundamentally based on Newton's laws of motion. Newton's three laws are: A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. The newton (symbol: N) is the unit of force i","Newton's laws of motion are three basic laws of classical mechanics that describe the relationship between the motion of an object and the forces acting on it. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, F = ma, an object with a mass, m, of one kilogram accelerates, a, at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, F, of one newton. Under Sir Isaac Newton's 336-year-old laws of motion and an important formula that sprang from his work, an object with a mass, ""m"", of one kilogram accelerates, ""a"", at one meter per second per second (about one-tenth the acceleration due to earth's gravity) when acted upon by a force, ""F"", of one newton. For a constant mass, force equals mass times acceleration (\vec{F} = m\vec{a}).
Mass is (among other properties) an inertial property; that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. These laws describe the relationship between the forces acting on a body and the motion of that body. It is named after Isaac Newton in recognition of his work on classical mechanics, specifically Newton's second law of motion.
which, upon identifying the negative derivative of the potential with the force, is just Newton's second law once again.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an object to not change its current state of motion (to remain at constant velocity) unless acted on by an external unbalanced force. These laws can be paraphrased as follows: A body remains at rest, or in motion at a constant speed in a straight line, unless acted upon by a force.
When a body is acted upon by a force, the time rate of change of its momentum equals the force.
Classical mechanics is fundamentally based on Newton's laws of motion. Newton's three laws are: A body at rest will remain at rest, and a body in motion will remain in motion unless it is acted upon by an external force. The newton (symbol: N) is the unit of force i[SEP]What is the relationship between mass, force, and acceleration, according to Sir Isaac Newton's laws of motion?","['D', 'A', 'C']",1.0
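The two formulas quoted throughout the row above, F = ma and Newton's law of universal gravitation, can be checked numerically. The following is a minimal illustrative sketch, not part of the dataset row; the function names and sample values are assumptions chosen only for the example.

# Python sketch: Newton's second law and the law of universal gravitation,
# as quoted in the contexts of the row above. Names and values are illustrative.

G = 6.674e-11  # gravitational constant, N*m^2/kg^2 (standard approximate value)

def acceleration(force_n, mass_kg):
    # Newton's second law rearranged: a = F / m
    return force_n / mass_kg

def gravitational_force(m1_kg, m2_kg, r_m):
    # Magnitude of F_21 = G * m1 * m2 / |r_21|^2
    return G * m1_kg * m2_kg / r_m**2

# A 1 kg mass under a 1 N force accelerates at 1 m/s^2 (the keyed answer D above).
print(acceleration(1.0, 1.0))              # 1.0
# Two 1 kg point masses 1 m apart attract with a force of about 6.674e-11 N.
print(gravitational_force(1.0, 1.0, 1.0))  # ~6.674e-11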
What did Arthur Eddington discover about two of Einstein's types of gravitational waves?,"Eddington's criticism seems to have been based partly on a suspicion that a purely mathematical derivation from relativity theory was not enough to explain the seemingly daunting physical paradoxes that were inherent to degenerate stars, but to have ""raised irrelevant objections"" in addition, as Thanu Padmanabhan puts it. ==Relativity== During World War I, Eddington was secretary of the Royal Astronomical Society, which meant he was the first to receive a series of letters and papers from Willem de Sitter regarding Einstein's theory of general relativity. On 3 June, despite the clouds that had reduced the quality of the plates, Eddington recorded in his notebook: ""... one plate I measured gave a result agreeing with Einstein."" It contained the following quatrain: During the 1920s and 30s, Eddington gave numerous lectures, interviews, and radio broadcasts on relativity, in addition to his textbook The Mathematical Theory of Relativity, and later, quantum mechanics. It was named for the noted astronomer Arthur Eddington, who formulated much of the modern theory of stellar atmospheres and stellar structure, popularized Albert Einstein's work in the English language, carried out the first test (gravitational lensing) of the general theory of relativity, and made original contributions to the theory. It is notable that while the Eddington results were seen as a confirmation of Einstein's prediction, and in that capacity soon found their way into general relativity textbooks, among observers there followed a decade-long discussion of the quantitative values of light deflection, with the precise results in contention even after several expeditions had repeated Eddington's observations on the occasion of subsequent eclipses. Afterward, Eddington embarked on a campaign to popularize relativity and the expedition as landmarks both in scientific development and international scientific relations. Throughout this period, Eddington lectured on relativity, and was particularly well known for his ability to explain the concepts in lay terms as well as scientific. Eddington was fortunate in being not only one of the few astronomers with the mathematical skills to understand general relativity, but owing to his internationalist and pacifist views inspired by his Quaker religious beliefs, one of the few at the time who was still interested in pursuing a theory developed by a German physicist. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein. The Eddington experiment was an observational test of general relativity, organised by the British astronomers Frank Watson Dyson and Arthur Stanley Eddington in 1919. The rejection of the results from the expedition to Brazil was due to a defect in the telescopes used which, again, was completely accepted and well understood by contemporary astronomers. (Image caption: The minute book of the Cambridge ∇2V Club for the meeting where Eddington presented his observations of the curvature of light around the sun, confirming Einstein's theory of general relativity.) Eddington's interest in general relativity began in 1916, during World War I, when he read papers by Einstein (presented in Berlin, Germany, in 1915), which had been sent by the neutral Dutch physicist Willem de Sitter to the Royal Astronomical Society in Britain. 
Eddington wrote a number of articles that announced and explained Einstein's theory of general relativity to the English-speaking world. Eddington also produced a major report on general relativity for the Physical Society, published as Report on the Relativity Theory of Gravitation (1918). Eddington's observations published the next year allegedly confirmed Einstein's theory, and were hailed at the time as evidence of general relativity over the Newtonian model. Einstein's equations admit gravity wave-like solutions. Eddington and Perrine spent several days together in Brazil and may have discussed their observation programs including Einstein's prediction of light deflection. I was wondering who the third one might be!"" (As related by Eddington to Chandrasekhar and quoted in Walter Isaacson, ""Einstein: His Life and Universe"", p. 262.) ==Cosmology== Eddington was also heavily involved with the development of the first generation of general relativistic cosmological models. Eddington, later said to be one of the few people at the time to understand the theory, realised its significance and lectured on relativity at a meeting at the British Association in 1916. Eddington also lectured on relativity at Cambridge University, where he had been professor of astronomy since 1913. Following the eclipse expedition in 1919, Eddington published Space Time and Gravitation (1920), and his university lectures would form the basis for his magnum opus on the subject, Mathematical Theory of Relativity (1923). Wartime conscription in Britain was introduced in 1917. ","Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could only be made to propagate at the speed of gravity by choosing appropriate coordinates.","Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could only be made to propagate at the speed of sound by choosing appropriate coordinates.","Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates.","Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could not be made to propagate at any speed by choosing appropriate coordinates.","Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could only be made to propagate at the speed of light by choosing appropriate coordinates.",C,kaggle200,"Also, Sir Arthur Eddington had discussed notions similar to operationalization in 1920 before Bridgman. Bridgman's formulation, however, became the most influential.
In the early years after Einstein's theory was published, Sir Arthur Eddington lent his considerable prestige in the British scientific establishment in an effort to champion the work of this German scientist. Because the theory was so complex and abstruse (even today it is popularly considered the pinnacle of scientific thinking; in the early years it was even more so), it was rumored that only three people in the world understood it. There was an illuminating, though probably apocryphal, anecdote about this. As related by Ludwik Silberstein, during one of Eddington's lectures he asked ""Professor Eddington, you must be one of three persons in the world who understands general relativity."" Eddington paused, unable to answer. Silberstein continued ""Don't be modest, Eddington!"" Finally, Eddington replied ""On the contrary, I'm trying to think who the third person is.""
In 1922, Arthur Stanley Eddington wrote a paper expressing (apparently for the first time) the view that gravitational waves are in essence ripples in coordinates, and have no physical meaning. He did not appreciate Einstein's arguments that the waves are real.
However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"". This also cast doubt on the physicality of the third (transverse–transverse) type, which Eddington showed always propagates at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to ""Physical Review"" in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in ""Physical Review"" again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere. In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor.","Originally, Sir Arthur Eddington took only electron scattering into account when calculating this limit, something that is now called the classical Eddington limit. Nowadays, the modified Eddington limit also accounts for other radiation processes such as bound-free and free-free radiation (see Bremsstrahlung) interactions.
The Eddington Medal is awarded by the Royal Astronomical Society for investigations of outstanding merit in theoretical astrophysics. It is named after Sir Arthur Eddington. The medal was first awarded in 1953; the frequency of the prize has varied over the years, at times being every one, two or three years. Since 2013 it has been awarded annually.
The possibility of gravitational waves was discussed in 1893 by Oliver Heaviside, using the analogy between the inverse-square law of gravitation and the electrostatic force. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational waves. When Einstein published his general theory of relativity in 1915, he was skeptical of Poincaré's idea since the theory implied there were no ""gravitational dipoles"". Nonetheless, he still pursued the idea and based on various approximations came to the conclusion there must, in fact, be three types of gravitational waves (dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl). However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"" (p. 72). This also cast doubt on the physicality of the third (transverse–transverse) type, which Eddington showed always propagates at the speed of light regardless of coordinate system. In 1936, Einstein and Nathan Rosen submitted a paper to Physical Review in which they claimed gravitational waves could not exist in the full general theory of relativity because any such solution of the field equations would have a singularity. The journal sent their manuscript to be reviewed by Howard P. Robertson, who anonymously reported that the singularities in question were simply the harmless coordinate singularities of the employed cylindrical coordinates. Einstein, who was unfamiliar with the concept of peer review, angrily withdrew the manuscript, never to publish in Physical Review again. Nonetheless, his assistant Leopold Infeld, who had been in contact with Robertson, convinced Einstein that the criticism was correct, and the paper was rewritten with the opposite conclusion and published elsewhere (pp. 79ff). In 1956, Felix Pirani remedied the confusion caused by the use of various coordinate systems by rephrasing the gravitational waves in terms of the manifestly observable Riemann curvature tensor. At the time, Pirani's work was overshadowed by the community's focus on a different question: whether gravitational waves could transmit energy. This matter was settled by a thought experiment proposed by Richard Feynman during the first ""GR"" conference at Chapel Hill in 1957. In short, his argument, known as the ""sticky bead argument"", notes that if one takes a rod with beads then the effect of a passing gravitational wave would be to move the beads along the rod; friction would then produce heat, implying that the passing wave had done work. Shortly after, Hermann Bondi published a detailed version of the ""sticky bead argument"". This later led to a series of articles (1959 to 1989) by Bondi and Pirani that established the existence of plane wave solutions for gravitational waves. After the Chapel Hill conference, Joseph Weber started designing and building the first gravitational wave detectors, now known as Weber bars. 
In 1969, Weber claimed to have detected the first gravitational waves, and by 1970 he was ""detecting"" signals regularly from the Galactic Center; however, the frequency of detection soon raised doubts on the validity of his observations as the implied rate of energy loss of the Milky Way would drain our galaxy of energy on a timescale much shorter than its inferred age. These doubts were strengthened when, by the mid-1970s, repeated experiments from other groups building their own Weber bars across the globe failed to find any signals, and by the late 1970s consensus was that Weber's results were spurious. In the same period, the first indirect evidence of gravitational waves was discovered. In 1974, Russell Alan Hulse and Joseph Hooton Taylor, Jr. discovered the first binary pulsar, which earned them the 1993 Nobel Prize in Physics. Pulsar timing observations over the next decade showed a gradual decay of the orbital period of the Hulse–Taylor pulsar that matched the loss of energy and angular momentum in gravitational radiation predicted by general relativity. This indirect detection of gravitational waves motivated further searches, despite Weber's discredited result. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. The idea of using a laser interferometer for this seems to have been floated independently by various people, including M. E. Gertsenshtein and V. I. Pustovoit in 1962, and Vladimir B. Braginskiĭ in 1966. The first prototypes were developed in the 1970s by Robert L. Forward and Rainer Weiss. In the decades that followed, ever more sensitive instruments were constructed, culminating in the construction of GEO600, LIGO, and Virgo. After years of producing null results, improved detectors became operational in 2015. On 11 February 2016, the LIGO-Virgo collaborations announced the first observation of gravitational waves, from a signal (dubbed GW150914) detected at 09:50:45 GMT on 14 September 2015 of two black holes with masses of 29 and 36 solar masses merging about 1.3 billion light-years away. During the final fraction of a second of the merger, it released more than 50 times the power of all the stars in the observable universe combined. The signal increased in frequency from 35 to 250 Hz over 10 cycles (5 orbits) as it rose in strength for a period of 0.2 second. The mass of the new merged black hole was 62 solar masses. Energy equivalent to three solar masses was emitted as gravitational waves. The signal was seen by both LIGO detectors in Livingston and Hanford, with a time difference of 7 milliseconds due to the angle between the two detectors and the source. The signal came from the Southern Celestial Hemisphere, in the rough direction of (but much farther away than) the Magellanic Clouds. The confidence level of this being an observation of gravitational waves was 99.99994%. A year earlier, the BICEP2 collaboration claimed that they had detected the imprint of gravitational waves in the cosmic microwave background. However, they were later forced to retract this result. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the detection of gravitational waves. In 2023, NANOGrav, EPTA, PPTA, and IPTA announced that they found evidence of a universal gravitational wave background. 
The North American Nanohertz Observatory for Gravitational Waves states that these waves were created over cosmological time scales by supermassive black holes, identifying the distinctive Hellings-Downs curve in 15 years of radio observations of 25 pulsars.","In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"". Silberstein continued ""Don't be modest, Eddington!"" Finally, Eddington replied ""On the contrary, I'm trying to think who the third person is.""
In 1922, Arthur Stanley Eddington wrote a paper expressing (apparently for the first time) the view that gravitational waves are in essence ripples in coordinates, and have no physical meaning. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"" (p. 72). This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. Nonetheless, he still pursued the idea and based on various approximations came to the conclusion there must, in fact, be three types of gravitational waves (dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl). However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational w","In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"". Silberstein continued ""Don't be modest, Eddington!"" Finally, Eddington replied ""On the contrary, I'm trying to think who the third person is.""
In 1922, Arthur Stanley Eddington wrote a paper expressing (apparently for the first time) the view that gravitational waves are in essence ripples in coordinates, and have no physical meaning. In 1922, Arthur Eddington showed that two of Einstein's types of waves were artifacts of the coordinate system he used, and could be made to propagate at any speed by choosing appropriate coordinates, leading Eddington to jest that they ""propagate at the speed of thought"" (p. 72). This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. Nonetheless, he still pursued the idea and based on various approximations came to the conclusion there must, in fact, be three types of gravitational waves (dubbed longitudinal–longitudinal, transverse–longitudinal, and transverse–transverse by Hermann Weyl). However, the nature of Einstein's approximations led many (including Einstein himself) to doubt the result. This also cast doubt on the physicality of the third (transverse–transverse) type that Eddington showed always propagate at the speed of light regardless of coordinate system. Some groups continued to improve Weber's original concept, while others pursued the detection of gravitational waves using laser interferometers. In 1905, Henri Poincaré proposed gravitational waves, emanating from a body and propagating at the speed of light, as being required by the Lorentz transformations and suggested that, in analogy to an accelerating electrical charge producing electromagnetic waves, accelerated masses in a relativistic field theory of gravity should produce gravitational w[SEP]What did Arthur Eddington discover about two of Einstein's types of gravitational waves?","['C', 'E', 'D']",1.0
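The GW150914 figures quoted in the row above (a final black hole of 62 solar masses, with the energy equivalent of about three solar masses radiated as gravitational waves) can be turned into an order-of-magnitude energy estimate via E = Δm·c². A minimal sketch follows, not part of the dataset row, assuming standard approximate values for the constants.

# Python sketch: energy radiated by GW150914, using E = (delta m) * c^2.
# The three-solar-mass figure comes from the row above; constants are standard values.

C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

delta_m = 3.0 * M_SUN          # mass-energy radiated as gravitational waves
energy_j = delta_m * C**2      # E = mc^2

print(f"{energy_j:.2e} J")     # roughly 5.4e47 J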