source_id | question | response | metadata
---|---|---|---|
94,022 | It is known that acid should be added to water and not the other way around, because mixing them is strongly exothermic. Our stomach contains HCl, so why don't we explode when we drink water? | The hydrochloric acid in the stomach is already quite dilute; its pH is in fact no less than 1.5, so at the extreme maximum there is only about 0.03 molar hydrochloric acid. And even that small amount is, of course, stabilized by being dissociated into solvated ions. There is just not enough stuff to react violently. | {
"source": [
"https://chemistry.stackexchange.com/questions/94022",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/44166/"
]
} |
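A quick check of the concentration quoted in that answer (a worked step using nothing but the definition of pH, with the pH 1.5 figure taken from the answer itself): $$c(\ce{H+}) = 10^{-\mathrm{pH}}\ \mathrm{mol\ L^{-1}} = 10^{-1.5}\ \mathrm{mol\ L^{-1}} \approx 0.03\ \mathrm{mol\ L^{-1}},$$ and since $\ce{HCl}$ is essentially fully dissociated, this also caps the hydrochloric acid concentration at roughly $0.03\ \mathrm{mol\ L^{-1}}$.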
95,509 | Recently, I was telling my friends about the violent reaction that takes place when you throw potassium into water. Soon after, a friend of mine claimed that lithium would react more violently than potassium. I disagreed with him, because potassium is more electropositive than lithium and thus more reactive. My friend claimed lithium to be more reactive than potassium due to its position in the reactivity series of metals: $$\ce{Li > Cs > Rb > K > Ba > Sr > Ca > Na > Mg}$$ Then we found out that potassium does indeed react more violently with water. But what about his argument? Why isn't he right? | For the reaction $$\ce{M -> M+ + e-}$$ the heat liberated is highest for lithium, owing to its highly negative $E^\circ$ value, so one would think that its reaction must be the most vigorous. The reason behind the more violent reactivity of potassium rather than lithium lies in kinetics, not in thermodynamics. No doubt, the maximum energy is evolved with lithium, but melting and ionization also consume the most energy (the melting point and ionization energy of lithium are the highest among the alkali metals), and so the reaction proceeds gently. On the other hand, potassium has a lower melting point and ionization enthalpy, and the heat of reaction is sufficient to melt it. The molten metal spreads over the water and exposes a larger surface to it. Also, the hydrated radius of lithium is the greatest of all the alkali metals, which reduces its ionic mobility and, in turn, the speed of the molten metal. That's why potassium gives a more violent reaction with water. Reference: Kumar, Prabhat. Conceptual Inorganic Chemistry; Shri Balaji Publications: Muzaffarnagar, U.P., 2014. | {
"source": [
"https://chemistry.stackexchange.com/questions/95509",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/63045/"
]
} |
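For reference, standard tabulated values bear the answer out (approximate textbook figures, quoted here for orientation): $E^\circ(\ce{Li+/Li}) \approx -3.04\ \mathrm{V}$ versus $E^\circ(\ce{K+/K}) \approx -2.93\ \mathrm{V}$, so thermodynamics slightly favours lithium, while lithium's melting point ($\approx 180\ \mathrm{^\circ C}$ vs. $\approx 64\ \mathrm{^\circ C}$) and first ionization energy ($\approx 520\ \mathrm{kJ\ mol^{-1}}$ vs. $\approx 419\ \mathrm{kJ\ mol^{-1}}$) are both far higher than potassium's - exactly where the kinetic argument takes over.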
96,402 | I was expecting that the water would become less "boily" since the ions in $\ce{NaCl}$ would require energy to dissociate. Instead it turned whitish and the water level suddenly rose and then subsided. I think the white color can be attributed to the fact that table salt is white, but why did the water suddenly flare up? | The salt does not instantly dissolve, so the surface of the crystals suddenly provides a lot of nucleation sites for the water to form vapour - hence the surge as it boils from these surfaces. The white is due to turbulence from the boiling, just like waterfalls look white. | {
"source": [
"https://chemistry.stackexchange.com/questions/96402",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/46306/"
]
} |
98,490 | I posted the following question in Physics SE and was advised to transfer it to Chemistry SE. I studied physics in college ten years ago and I recently started to learn biochemistry. I enjoy finding out that some familiar concepts in physics play important roles in biochemistry, such as entropy and the Gibbs free energy. For example, as an (ex-)student of physics, I am happy to know that the Gibbs free energy determines the directions of chemical reactions. I feel this is a good example where a sort of fundamental law of physics determines what a phenomenon looks like. However, I still cannot understand why the chemical reactions in a body need to be so complex. Many chemical systems take more than a few steps to achieve their purposes. According to Wikipedia, glycolysis takes ten steps. Why are so many steps necessary? I tried to find a physical law that prohibits the glycolysis process from being achieved in one or two steps, but I could not find an answer. I would like to know (or discuss) whether there is a physical law that makes chemical systems so complex (many steps required). My assumption is that some physical law prohibits the existence of an enzyme that realizes a one-step process of glycolysis. | There is no fundamental law preventing simple chemical reactions: things are complex because of the combinatorial complexity of chemical compounds. The complexity of many chemical reactions is a byproduct of the fact that there is a very, very large variety of possible chemicals. Much of that complexity happens because of the almost infinite ways even some simple elements can be combined together to give complicated structures (carbon being the archetypal example). Theoretically, for example, there are 366,319 ways to build different alkane compounds from just 20 carbon atoms and hydrogen atoms ("theoretically" because not all of the examples can exist in 3D space; see this question here and this entry in the Encyclopedia of Integer Sequences). And this number drastically understates the real complexity, as it ignores mirror images and more complicated ways of joining the carbon atoms together (like in rings, for example). The complexity just gets more mind-boggling if you start adding other elements to the mix. No physical law prevents us from making any possible compound in one step. But the sheer complexity of the end products makes simple ways to reach many of them extraordinarily unlikely from the laws of probability alone, never mind the specific ways chemical components can be easily joined up to make more complex things. Here is a simple analogy. Let's say you want to assemble a Lego model of the Star Wars Death Star weapon. There are 4,016 pieces of Lego that have to be assembled in the right combination and the right order. There is no physical law that says you couldn't somehow do that in a single step. But no sane person's intuition would assume that this was easy or likely. It isn't physical law that prevents one-step assembly: it is combinatorial complexity. Chemistry is, do I really need to say this, more complicated than Lego: not least because atoms can be joined up in many more complex ways than the simple, standard-sized physical pins that join Lego bricks together. Both nature and synthetic chemists have explored many ways to achieve particular end products from simpler building blocks. Sometimes new chemical Death Star equivalents (like the geometrically beautiful hydrocarbon dodecahedrane, which, incidentally, has 20 carbons but isn't counted in the list of 20-carbon alkanes) are made only after long sequences of reactions. The original synthesis of dodecahedrane took 29 steps, but others found better, higher-yielding routes that took only 20. Many important drugs are first synthesised in long sequences of reactions but are later found to be available via much shorter routes (there is nothing like the economics of manufacturing cost to encourage creativity). So the reason many chemical reactions take multiple steps isn't physical laws but probability theory. There are just too many possible chemicals and too many ways to combine things for single-step routes to most given products to be likely to work. Doing one thing at a time (just like you would if building the Lego Death Star) is the way to get what you want. | {
"source": [
"https://chemistry.stackexchange.com/questions/98490",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/65313/"
]
} |
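To make the combinatorial point concrete, here is a minimal sketch (Python; the isomer counts are copied from OEIS A000602, the integer-sequence entry linked in the answer) showing how quickly the number of constitutionally distinct alkanes $\ce{C_nH_{2n+2}}$ grows:

```python
# Number of constitutionally distinct alkanes C_nH_(2n+2).
# Values copied from OEIS A000602 (cited in the answer above);
# stereoisomers and rings are ignored, as the answer notes.
isomer_counts = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 5, 7: 9,
                 8: 18, 9: 35, 10: 75, 20: 366_319}

for n, count in sorted(isomer_counts.items()):
    print(f"C{n:<2} -> {count:>7,} isomers")
```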
99,659 | Rotting fish seem to give off the same (very pungent) kind of smell, regardless of the kind (salmon, sea bream, tuna, etc.). What exactly is it that's responsible for this unique smell? (Though I've answered my own question below, do feel free to post an answer of your own if you want.) | The "fishy" odor that you're familiar with is brought about by a whole bunch of compounds, and not any single one. Then again, if we were to narrow this down a bit, we could say that simple nitrogen compounds are the main culprits. But suppose we want to blame only a single compound for the delightfully pungent odor of rotting fish, and we couldn't be bothered with the nuances of olfactory physiology or food chemistry to go through a list of compounds running into the hundreds (if not thousands). In that case, we can pin the blame on trimethylamine (TMA). With regard to fish, the TMA is a result of the degradation of trimethylamine oxide (TMAO) present in the body fluids (blood, lymph, excreta) of the fish. Just so you don't get the wrong idea, TMAO is present in the body fluids of all vertebrates ~~I can think of~~, not just fish (albeit in much lower concentrations, which is why you don't mistake ground chicken for tuna). Now why is TMAO more abundant in fish? Ever tried to pickle a cucumber (or anything for that matter)? Notice that the cucumber shrivels up over time? It's losing water to the pickling fluid by a process known as exosmosis: the water is drawn out from a less concentrated environment (the cucumber's tissue) to a more concentrated one (the pickling fluid). Disclaimer: As user @lly pointed out in the comments, this particular bit has the potential to cause a bit of confusion. The inside of a cell (the cytoplasm, I mean) is technically an aqueous colloidal solution (the pickling fluid is an aqueous solution as well). In the context of aqueous solutions, "concentration" is an indication of how much solute is present, not water. So in this context, if something is more concentrated, it has less water in it. Now the same thing is going to want to happen to a fish in salt water, which isn't so nice from the fish's perspective. To counter the outflow of water, a fish has a significantly higher concentration of the familiar metabolite urea in its body fluids (as compared to terrestrial vertebrates). This establishes a situation where the fish's body fluids have roughly the same "concentration" as the salt water it swims in (the more correct term here would be "osmolarity", in place of "concentration"). That urea, if left to its own devices, will seek to destabilize macromolecular structures and inhibit cellular functions; the TMAO present alongside the urea helps counteract those detrimental effects. Of course, going by this logic, saltwater fish should have a much stronger "fish odor" than freshwater fish (which is certainly the case). Let's not forget the classic kitchen tip: rubbing fish with something acidic like vinegar, ~~battery acid~~ or lemon juice helps reduce the odor (the TMA degrades under acidic conditions). Also, some of you might know that a fish-like smell is observed in rancid food. That, apparently, is the result of the oxidation of omega-3 fatty acids. | {
"source": [
"https://chemistry.stackexchange.com/questions/99659",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/33991/"
]
} |
102,502 | I searched for the strongest oxidising agent and I found different results: $\ce{ClF3}$, $\ce{HArF}$ and $\ce{F2}$ were among them. Many said $\ce{ClF3}$ is the most powerful, as it oxidises everything, even asbestos, sand and concrete, and can easily set fire to anything, fires which can't be stopped; it can only be stored in Teflon. And $\ce{HArF}$ could be a very powerful oxidant due to its high instability as a compound of argon with fluorine, but was it ever even used as such? What compound is actually used as an oxidising agent and was proven to be stronger than the others, by, for example, standard reduction potential? | Ivan's answer is indeed thought-provoking. But let's have some fun. IUPAC defines oxidation as: The complete, net removal of one or more electrons from a molecular
entity. My humble query is thus - what better way is there to remove an electron than combining it with a literal anti-electron? Yes, my friends, we shall seek to transcend the problem entirely and swat the fly with a thermonuclear bomb. I submit, as the most powerful entry, the positron. Since 1932, we've known that ordinary matter has a mirror image, which we now call antimatter. The antimatter counterpart of the electron ($\ce{e-}$) is the positron ($\ce{e+}$). To the best of our knowledge, they behave exactly alike, except for their opposite electric charges. I stress that the positron has nothing to do with the proton ($\ce{p+}$), another class of particle entirely. As you may know, when matter and antimatter meet, they release tremendous amounts of energy, thanks to $E=mc^2$. For an electron and positron with no initial energy other than their individual rest masses of $\pu{511 keV c^-2}$ each, the most common annihilation outcome is: $$ \ce{e- +\ e+ -> 2\gamma}$$ However, this process is fully reversible in quantum electrodynamics; it is time-symmetric. The opposite reaction is pair production: $$ \ce{2\gamma -> e- +\ e+ }$$ A reversible reaction? Then there is nothing stopping us from imagining the following chemical equilibrium: \begin{align}
\ce{e- +\ e+ &<=> 2\gamma} &
\Delta_r G^\circ &= \pu{-1.022 MeV} =\pu{-98 607 810 kJ mol^-1}
\end{align} The distinction between enthalpy and Gibbs free energy in such subatomic reactions is completely negligible, as the entropic factor is laughably small in comparison, in any reasonable conditions. I am just going to brashly consider the above value as the standard Gibbs free energy change of reaction. This enormous $\Delta_r G^\circ$ corresponds to an equilibrium constant $K_\mathrm{eq} = 3 \times 10^{17276234}$, representing a somewhat product-favoured reaction. From $\Delta_r G^\circ = -nFE^\circ$ (with $n = 1$), the standard electrode potential for the "reduction of a positron" is then $\mathrm{\frac{98\ 607\ 810 \times 10^{3}\ J\ mol^{-1}}{96\ 485.33212\ C\ mol^{-1}} = +1\ 021\ 998\ V}$. Ivan mentions in his answer using an alpha particle as an oxidiser. Let's take that further. According to NIST, a rough estimate for the electron affinity of a completely bare darmstadtium nucleus ($\ce{Ds^{110+}}$) is $\pu{-204.4 keV}$, so even a stripped superheavy atom can't match the oxidising power of a positron! ... that is, until you get to $\ce{Ust^{173+}}$ ... | {
"source": [
"https://chemistry.stackexchange.com/questions/102502",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/68148/"
]
} |
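The three headline numbers in that answer can be reproduced with a few lines of arithmetic. A minimal sketch (Python, CODATA constants; small deviations from the quoted figures come from rounding in the constants used):

```python
import math

e  = 1.602176634e-19   # elementary charge, C
NA = 6.02214076e23     # Avogadro constant, 1/mol
F  = 96485.33212       # Faraday constant, C/mol
R  = 8.314462618       # molar gas constant, J/(mol K)
T  = 298.15            # temperature, K

dE_eV = 2 * 510.998950e3        # annihilation energy per event, eV (2 x 511 keV)
dG = -dE_eV * e * NA            # J/mol, taken as the standard Delta_r G
print(f"Delta_r G ~ {dG / 1e3:,.0f} kJ/mol")   # ~ -98,607,810 kJ/mol

log10_K = -dG / (R * T * math.log(10))
print(f"log10(K)  ~ {log10_K:,.0f}")           # ~ 1.7e7, i.e. K ~ 10^(17 million)

E0 = -dG / F                    # one electron transferred (n = 1)
print(f"E0        ~ {E0:,.0f} V")              # ~ +1,021,998 V
```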
102,971 | I've heard that diamond is the hardest known natural material but, on a Google search, I found that it can easily be broken by a hammer as it's not tough. So, what is the difference between hardness and toughness? According to me, both should be the same, and if both are different, then what is the toughest known substance? | Hardness and toughness are not the same. Hardness and toughness are very different qualities in materials and are only weakly related. Hardness is strongly related to the more well-defined quantity of stiffness, which measures how easily a compound can be deformed under stress. Glass and diamond are very stiff materials, for example. If you try to poke them with something, they resist deforming to accommodate your poke. (Hardness is not perfectly aligned to stiffness because of small-scale microstructures in many materials, but this is good enough for now.) Toughness is a vaguer term for materials and there isn't a simple way to measure it. This is partially because it varies depending on circumstances in a way that stiffness does not, and it is a property of the overall structure and not just the materials that make up the structure. Understanding it requires some insight into why things break (and why other things don't): we need to know a lot about the small-scale structure of materials, not just the substances involved. For example, stainless steel (used in knives and forks) is a well-known tough material, but cast iron is brittle. Both are mostly made of iron. The differences are in the crystalline structures. One key property of tough structures is that cracks don't propagate. So a stick of glass will break easily, as will a glass fibre. But a bundle of glass fibres embedded in an epoxy resin can be very tough (because the cracks in individual glass fibres are not propagated through the epoxy resin). Some tough metals can adjust the micro-defects in their crystalline structures to absorb the strain that would otherwise propagate cracks. Some very soft compounds are very tough because they deform so easily that it is hard to start cracks - nylon rope, for example. Diamond is very, very stiff. But it has no protection mechanism against cracks. So, like glass, once a crack has started it doesn't take a lot of energy to cause it to spread, so it may be stiff but it isn't tough. A fuller explanation of this took materials scientists and engineers a long time to work out, and would require a whole book to do it justice. Luckily that book has been written and is called The New Science of Strong Materials. It is well worth a read. | {
"source": [
"https://chemistry.stackexchange.com/questions/102971",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/68148/"
]
} |
103,387 | Some topics here have touched on this before (see 1, 2, 3), but I haven't found a clear definition yet. I would like to know what exact property of the wave function these terms refer to. It would also be helpful to have a clear definition of 'reference' and 'configuration'. I'll try to explain below where my problems are in clearly understanding/defining these terms: Starting with Hartree-Fock, it is obvious that the wave function in HF is both single-reference and 'single-configurational': there is only one Slater determinant. Going to configuration interaction methods, the wave function now becomes a linear combination of several Slater determinants. The additional Slater determinants are excitations of the ground state determinant: virtual orbitals from HF are taken and replace previously occupied orbitals in the determinant. However, these orbitals still have the same coefficients as in HF - only the coefficients in front of the Slater determinants are optimized for the linear combination. If I'm correct, 'configuration' here refers to one particular Slater determinant - a method is therefore 'multi-configurational' if the wave function that is used has two or more (different) Slater determinants, correct? That also means that none of the CI methods (be it CIS, CISD, ..., or Full CI) are multireference methods? Continuing with CASSCF, this method is basically Full CI limited to the chosen active space of orbitals. It is therefore multi-configurational. At the same time, it is also often referred to as being 'multireference'. The only difference to CI, however, seems to be the optimization of the coefficients in the Slater determinants themselves; hence, this must be the defining criterion for 'multireference'? What does 'reference' here refer to? Now there is also multireference CI. From the above definition, I would expect this to be a form of CI where I also optimize the orbitals, but that does not seem to be the case. The Wikipedia article on MRCI starts with: In quantum chemistry, the multireference configuration interaction (MRCI) method consists of a configuration interaction expansion of the eigenstates of the electronic molecular Hamiltonian in a set of Slater determinants which correspond to excitations of the ground state electronic configuration but also of some excited states. The Slater determinants from which the excitations are performed are called reference determinants. This is confusing to me: Is 'excitation of the ground state electronic configuration' vs. 'excited states' referring to the optimization of the Slater determinants themselves? Or does 'excited state' refer to configurations with different total spin? That would be a different definition of 'reference', but then CASSCF would only be a multireference method if it uses the corresponding SA-CSFs, regardless of whether the Slater determinants are optimized or not? | Your problem seems to be with the terminology used in CI methods, so let me go through the different terms you mentioned: A configuration is a certain occupation of (molecular) orbitals. Mathematically, configurations can be represented in two ways. The first one is the Slater Determinant (SD), an anti-symmetrized product of spin-orbitals. Slater Determinants, however, are not eigenfunctions of the spin operator $\hat S^2$, but the electronic wave function needs to fulfill this requirement. Therefore one constructs spin-adapted Configuration State Functions (CSFs) as certain linear combinations of SDs.
Using CSFs instead of SDs usually makes a calculation more stable. "Configuration" is a more general term, which does not explicitly say whether an SD or a CSF is considered. Multi-configurational just means the method considers more than one configuration. Reference means we have a designated configuration from which the excitations are generated. Single-reference thus means we only have one such configuration (usually the HF configuration), e.g. in CISD or CCSD. Multi-reference means we have more than one configuration to generate excitations from. So multi-configurational just means we have many configurations, while single-/multireference says something about how those configurations are selected/generated. How does single-/multireference apply to the different methods? In HF we do not generate any excitations, therefore this concept does not really apply here. But I think one could still call it single-reference and single-configurational. CISD and CCSD are the typical examples of single-reference methods. Nothing special here. In FCI and CASCI we don't restrict the excitations to certain degrees (singles, doubles, etc.); instead, we just take all of them. The concept of having one (or more) reference configurations is a possible perspective, but it is not really necessary. FCI can be viewed as having one reference configuration and taking all excitation degrees (up to the number of electrons available), which would make it single-reference. But from another perspective, we can argue that every MRCI wave function is just a truncation of the corresponding FCI wave function. So if FCI covers everything MRCI does and more, would it not be multi-reference as well? Again, I would just say the concept does not apply here. MCSCF (CASSCF) is a combination of CI (CASCI) and SCF, where SCF means we optimize the orbitals as well. This is not done in CI(SD...), CC(SD...), MRCI, etc. Other than that, the above arguments apply in the same way to the CI space as before. Multi-Reference CI and CASSCF As argued above, personally I would not consider CASSCF to be multi-reference, as the concept does not really apply. But I can see why people would consider it as such. One usually does CASSCF because single-reference methods fall short of describing the wave function correctly in a qualitative way. Such systems are then called strongly correlated, and CASSCF can treat that strong correlation (also called static correlation) which is missing in single-reference calculations. In turn, however, CASSCF is missing the kind of correlation single-reference methods can treat well, called weak or dynamic correlation. Multi-reference methods are now the approach to combine both strong and weak correlation, by first doing a CASSCF and then using those configurations as a reference space to generate all the excitations from. Orbital optimization, however, is usually not feasible for dynamic correlation, because dynamic correlation usually means having a lot of configurations and therefore a huge CI space. With static correlation there are far fewer configurations, so doing orbital optimization in addition to CI optimization is feasible and yields qualitative improvements of the orbitals. Therefore this is done in CASSCF, but not in MRCI. "Excited states" in MRCI What is meant here by "excited states" are actually "excited configurations" (with respect to the HF configuration).
If you, for example, choose a CAS as your reference space, then all configurations, except for the HF configuration itself, are excited configurations. Please note that "excited state" is not a good choice of words here, since it is actually something different: a state is a result of a CI calculation and is physically observable. On the other hand, a configuration is a basis function you put into a CI calculation and therefore is more of an abstract mathematical tool. However, you can approximate a state as a configuration to some extent; this is, for example, done in HF. | {
"source": [
"https://chemistry.stackexchange.com/questions/103387",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/17102/"
]
} |
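For readers who want these terms anchored in a formula, the generic CI expansion under discussion is (standard notation: $\Phi_0$ is the reference determinant, $\Phi_i^a$ and $\Phi_{ij}^{ab}$ its singly and doubly excited configurations): $$|\Psi_\mathrm{CI}\rangle = c_0|\Phi_0\rangle + \sum_{i,a} c_i^a|\Phi_i^a\rangle + \sum_{\substack{i<j\\a<b}} c_{ij}^{ab}|\Phi_{ij}^{ab}\rangle + \dotsb$$ A single-reference method generates every excited configuration from the single $\Phi_0$; a multireference method repeats this expansion starting from several reference configurations (e.g. the CASSCF configurations in MRCI).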
103,777 | Fundamentally, they're both carbohydrates, although the cellulose in wood is essentially polymerized glucose, which combined with its isomer fructose forms sucrose. So why does wood readily burn while table sugar chars? | Combustion is a gas-phase reaction. The heat of the flame vapourises the substrate, and it's the vapour that reacts with the air. That's why heat is needed to get combustion started. Anyhow, wood contains lots of relatively volatile compounds, so it's not too hard to get combustion started. Once combustion has started, the heat of the flame keeps the reaction going. However, sugar dehydrates and releases water when you heat it. Water isn't flammable (obviously), so there's no way to get combustion started. Dehydration leaves behind pure carbon, and that is non-volatile, so again there's no way to get this to burn. Carbon will burn, of course, but you need a high temperature to get it going. | {
"source": [
"https://chemistry.stackexchange.com/questions/103777",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/65484/"
]
} |
104,382 | I am a little bit confused about what an isobar is. Its online definition is that it's an element with the same number of neutrons but a different number of protons from an element $\ce{X}$. To me, it doesn't make sense from the get-go, because once you change the number of protons the element changes as well, so why exactly is it defined as the same element $\ce{X}$ with the same number of neutrons and a different number of protons? Definition of an isotope: An isotope is an element $\ce{X}$ with the same number of protons and a different number of neutrons. So to the actual question now. Isn't an isobar just an isotope? Here is an example to clarify what I mean. If we take, for example, carbon $\ce{^12C(p:6, n:6)}$ and turn it into an isotope, it will be $\ce{^13C(p:6, n:7)}$, and that makes sense; but if we turn it into an isobar, it would be $\ce{^13C(p:7, n:6)}$, which doesn't make sense, because it looks exactly like an isotope of nitrogen, $\ce{^13N(p:7, n:6)}$. If the atomic number changes, then the element changes as well. So isn't an isobar just an isotope of the following element with a smaller neutron number? | Not quite: an isotope has the same number of protons ($A - N = Z = \mathrm{constant}$), but a different number of neutrons ($N$ varies; e.g. $\ce{^3_\color{red}{1}H}$ and $\ce{^2_\color{red}{1}H}$, or $\ce{^235_\color{red}{92}U}$ and $\ce{^238_\color{red}{92}U}$, are isotopes). An isobar has a fixed number of total nucleons ($Z + N = A = \mathrm{constant}$; e.g. $\ce{^\color{red}{40}_19K}$ and $\ce{^\color{red}{40}_20Ca}$, or $\ce{^\color{red}{3}_2He}$ and $\ce{^\color{red}{3}_1H}$, are isobars). Not nearly as mainstream as isotopes, but isobars are important to consider when doing mass spectrometry. Extra fact: For nuclei with the same number of neutrons ($A - Z = N = \mathrm{constant}$), the term is isotones. | {
"source": [
"https://chemistry.stackexchange.com/questions/104382",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/69355/"
]
} |
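A tiny sketch that encodes the three definitions from the answer (Python; the function name and helper structure are illustrative, not from any library):

```python
def relation(z1, n1, z2, n2):
    """Classify two nuclides, each given as (Z protons, N neutrons)."""
    if (z1, n1) == (z2, n2):
        return "same nuclide"
    if z1 == z2:
        return "isotopes"   # same Z, different N
    if z1 + n1 == z2 + n2:
        return "isobars"    # same mass number A = Z + N
    if n1 == n2:
        return "isotones"   # same N, different Z
    return "none of the above"

print(relation(1, 2, 1, 1))      # isotopes: 3H vs 2H
print(relation(19, 21, 20, 20))  # isobars:  40K vs 40Ca
print(relation(6, 7, 7, 7))      # isotones: 13C vs 14N
```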
104,408 | Metals form metallic bonds which explain many of their chemical and physical properties. This is most familiar in the solid state but metallic properties are still quite recognizable in the liquid state e.g. mercury and molten iron. However, gaseous metals are much less familiar and it is not obvious whether they would still have a distinct metallic behaviour. For example, is there a distinction between gas and plasma for a metal? This article mentions that the metallic bond persists in the liquid state but says nothing about the gas state: Metallic bonding at Chemguide . So, do metals retain any distinct metallic behaviour in the gas state? Note that I am not asking only about the electrical properties. My main question is whether the gas consists of neutral atoms, small molecules, or positive ions in a sea of electrons? | No, gaseous metals do not retain metallic bonds, nor metallic conductivity, nor luster, nor any other metallic properties. They are no different from other gases. True, they typically require pretty high temperatures to form, but then again, they are hardly special in this regard, as many non-metallic substances require the same. See, all metallic properties are in fact collective effects. They are caused by metallic bonding, and not just of two atoms, but of an entire piece of metal (like a thousand, only much more). You don't have bonding in gases. The particles (lone atoms in this case) are basically free to go. Maybe they bump into each other, two or three at a time, but definitely not a thousand. Plasma is an entirely different thing, unrelated to your initial question. Yes, plasma is pretty different from a gas; to obtain it, you'll have to heat the gas until it ionizes . So it goes. | {
"source": [
"https://chemistry.stackexchange.com/questions/104408",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/46004/"
]
} |
104,430 | I'm having trouble understanding the following two graphs, which are the radial wavefunction of the hydrogen 1s orbital and the corresponding radial distribution function: Specifically, why is the radial distribution function zero at $r = 0$, but not the radial wavefunction? Also, why does the radial distribution function have a maximum at the Bohr radius $a_0$? (Mathematica input for the graphs: Plot[{2*Exp[-x], 4*Exp[-2 x]*x^2}, {x, 0, 6}, AxesLabel -> {r/Subscript[a, 0], Subscript[f, "1s"][r]}, PlotRange -> {{0, 6}, {0, 2}}, PlotLegends -> {"Radial wavefunction", "Radial distribution function"}, BaseStyle -> {FontSize -> 14}, ImageSize -> Medium] ) | The radial distribution function is $f_{1s}(r) = r^2\,[R_{1s}(r)]^2$, with $R_{1s}(r) = 2a_0^{-3/2}e^{-r/a_0}$ - exactly the two curves in your Mathematica input (in units of $a_0$). The factor of $r^2$ appears because the probability of finding the electron at a distance between $r$ and $r + \mathrm{d}r$ is the probability density integrated over a spherical shell, whose volume $4\pi r^2\,\mathrm{d}r$ vanishes at the nucleus. So although the wavefunction itself is largest at $r = 0$, there is no volume there in which to find the electron, and the radial distribution function is zero. The maximum follows from setting the derivative to zero: $$\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2 e^{-2r/a_0}\right) = 2r\,e^{-2r/a_0}\left(1 - \frac{r}{a_0}\right) = 0,$$ which gives $r = a_0$: the growing shell volume ($r^2$) and the decaying density ($e^{-2r/a_0}$) balance exactly at the Bohr radius. | {
"source": [
"https://chemistry.stackexchange.com/questions/104430",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/70303/"
]
} |
104,791 | I have been taught that oxygen is a chemical element, in other words a certain type of atom that has 8 protons in its nucleus. So why is O2 called oxygen? It is not a type of atom but rather a molecule. | I think what you may find most helpful is to know a bit of the history of element discovery and atomic theory. The first pure substance containing only the element oxygen to be isolated was dioxygen ($\ce{O2}$), in 1774, though it was called "dephlogisticated air" until 1777, when Lavoisier used the term "oxygen" for the first time. This was some 30 years before John Dalton even proposed the first empirical atomic theory. Back then, we only barely had an understanding of stoichiometry, such that Dalton famously claimed the molecular formula for water was $\ce{HO}$. The fact that dioxygen is a substance made of molecules containing two atoms of oxygen probably wasn't widespread knowledge until at least 1811, with the gas stoichiometry experiments of Amedeo Avogadro. Basically, for a time, we knew that there was a substance composed of a single type of atom, which could not be broken down into anything simpler. This fit the then-prevalent definition of an element: "a pure substance that could not be decomposed into any simpler substance". We knew that Lavoisier's "oxygen" had to be $\ce{O_n}$, for some n, but we had no reason to assume $n \neq 1$ for decades. By the time we figured out $n = 2$, the name "oxygen" was already widely used to refer to dioxygen. The fact that $n = 3$ also forms a stable compound in ambient conditions (ozone) would also not be known until 1867. A similar story happened with (di)nitrogen, (octa)sulfur, (tetra)phosphorus, and so on. The only elements which form stable monoatomic substances in reasonable conditions are the noble gases. There is an interesting aspect to consider behind all this. There are some (such as Eric Scerri) who claim we are doing Chemistry a disservice in muddling together the properties of elements and the pure substances which they make. Nowadays our definition of an element is solely dependent on the number of protons inside an atomic nucleus, with no reference to reactivity or to what form the pure substance can be found in. In this sense, the elements do not have "reactivities", "melting points", etc.; these are all properties of the pure substances. The only true properties of the elements are things such as electronic distribution, ionisation energies, and so on. However, it is common to see periodic tables stating the melting and boiling points of the pure substances of each chemical element, and even Wikipedia bundles the physical properties of dioxygen with the atomic properties of elemental oxygen. For better or for worse, we're stuck with this subtle ambiguity in nomenclature. | {
"source": [
"https://chemistry.stackexchange.com/questions/104791",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/-1/"
]
} |
105,503 | I'm not an expert, but as far as I understand, a sugar solution is completely neutral, since sugar can't take hydrogen ions out of the water or donate them to it. Sugar is a non-ionic compound, so it does not release $\ce{H+}$ or $\ce{OH-}$ ions in water, and it will not make the solution acidic or alkaline. Yet I keep reading and seeing charts of how sugars make your body acidic, like this one: What process makes a neutral pH solution into an acidic one? I'm not into chemistry at all, and therefore the simpler the answer the better. | It is not proven that "sugar makes your body acidic"! Your body's pH is very tightly regulated by its internal systems; it is also different in different parts of the body - the stomach is acidic (1.0-2.5), the intestines are mildly basic (jejunum 7-9, terminal ileum about 7.5; reference here). Blood pH is 7.35, and any deviation from this is indicative of serious illness. | {
"source": [
"https://chemistry.stackexchange.com/questions/105503",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/71172/"
]
} |
108,219 | What is the largest (most carbon atoms) alkane having heptane as its base name? For example, 2,2,3,3-tetramethylbutane is the largest (most carbon atoms) alkane retaining butane as its base name. | Any alkyl substituent of butane in position 2 or 3 cannot be longer than $\ce{CH3}$, since that would lead to a longer parent chain. And obviously, there cannot be any alkyl substituent at all in the first or the last position of the butane chain. Therefore, the largest structure based on a butane parent chain is 2,2,3,3-tetramethylbutane. This principle can be extended to a heptane parent chain. The maximum lengths for alkyl substituent chains are 0 for positions 1 and 7, 1 for positions 2 and 6, 2 for positions 3 and 5, and 3 for position 4. Therefore, the largest theoretical structure based on a heptane parent chain is 3,3,5,5-tetra-tert-butyl-4,4-bis[3-(tert-butyl)-2,2,4,4-tetramethylpentan-3-yl]-2,2,6,6-tetramethylheptane ($\ce{C53H108}$). | {
"source": [
"https://chemistry.stackexchange.com/questions/108219",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/36619/"
]
} |
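A worked tally of the carbons in that final structure: the heptane backbone contributes $7$; the four methyls at C-2/C-6 contribute $4 \times 1 = 4$; the four tert-butyls at C-3/C-5 contribute $4 \times 4 = 16$; and the two 3-(tert-butyl)-2,2,4,4-tetramethylpentan-3-yl groups at C-4 contribute $2 \times (5 + 4 + 4) = 26$. In total, $7 + 4 + 16 + 26 = 53$ carbons, matching $\ce{C53H108}$ (acyclic and saturated, so $\ce{C_nH_{2n+2}}$ gives $2 \times 53 + 2 = 108$ hydrogens).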
109,466 | The gas laws, namely Boyle's Law, Charles' Law, Avogadro's Law and Gay-Lussac's Law, are all experimental laws. Combining these laws, we get the ideal gas law $pV=nRT$. Also, "real life" gases do not exactly follow this law, so there are more laws for "real life" gases: van der Waals' law, the Dieterici equation, etc., which approximately describe these gases within certain boundaries of the gas parameters: pressure $p$, volume $V$ and temperature $T$. But there seems to be an apparent logical flaw: the ideal gas law $pV=nRT$ was found by experimenting on "real life" gases, but these "real life" gases do not follow the ideal gas law. How could this be the case? | The ideal gas law is a very good approximation of how gases behave most of the time. There is no logical flaw in the laws. Most gases, most of the time, behave in a way that is close to the ideal gas equation. And, as long as you recognise the times they don't, the equation is a good description of the way they behave. The ideal gas equation assumes that the molecules making up the gas occupy no volume, that they have no attractive forces between them, and that their interactions consist entirely of elastic collisions. These rules can't explain, for example, why gases ever liquefy (this requires attractive forces). But most of the common gases that were used to develop the laws in the first place (normal atmospheric gases like oxygen or nitrogen) are usually observed far from the point where they do liquefy. As for the volume taken up by the molecules of the gas, consider this. A mole of liquid nitrogen occupies about 35 mL, and this is a fair approximation of the volume occupied by the molecules themselves. At STP, that same mole of gas occupies a volume of about 22,700 mL, or about 650 times as much. So, at least to a couple of decimal places, the volume occupied by the nitrogen molecules is negligible. The point is that for gases not close to the point where they liquefy (and few components of the atmosphere are), the ideal gas laws are a very good approximation for how gases behave, and that is what we observe in experiments on them. The fancier and more sophisticated equations describing them are only really required when the gas gets close to liquefaction. | {
"source": [
"https://chemistry.stackexchange.com/questions/109466",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/-1/"
]
} |
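The "about 650 times" figure follows directly from the ideal gas law itself (a worked step, taking STP as $0\ \mathrm{^\circ C}$ and $1\ \mathrm{bar}$): $$V_\mathrm{m} = \frac{RT}{p} = \frac{8.314\ \mathrm{J\ mol^{-1}\ K^{-1}} \times 273.15\ \mathrm{K}}{10^{5}\ \mathrm{Pa}} \approx 2.27 \times 10^{-2}\ \mathrm{m^3} \approx 22\,700\ \mathrm{mL},$$ and $22\,700 / 35 \approx 650$.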
110,794 | The term equilibrium is used in the context of reversible reactions that reach a point where concentrations no longer change. The term steady-state is used in enzyme kinetics when the concentration of the enzyme-substrate complex no longer changes (or hardly changes, in case of a quasi steady state). It is also used to describe multi-step biochemical pathways. Is there a difference between the two, given that both concern a situation where concentrations don't change over time? | Yes, equilibrium and steady-state are distinct concepts. A reaction is at equilibrium if reactants and products are both present, the forward and reverse rates are equal and the concentrations don't change over time. If this is the only reaction in a closed, isolated system, the entropy in the system is constant. Steady-state implies a system that is not at equilibrium (entropy increases). A species is said to be at steady state when the rate of reactions (or more general, processes) that form the species is equal to the rate of reactions (or processes) that remove the species. In both cases, there are rates ( $\mathrm{rate}_1$ and $\mathrm{rate}_2$ ) that are equal. For an equilibrium, the forward and reverse rate of the same reaction are equal to each other. For a steady state, the rates of processes leading to increase of the concentration of a species are equal to the rates of processes leading to decrease of the concentration of the same species. $$\ce{A <=>[rate_1][rate_2] B}\ \ \ \ \ vs \ \ \ \ \ \ce{source->[rate_1]C->[rate_2]sink} $$ For an equilibrium, all concentrations are constant over time. For a steady-state, there is a net reaction, so some amounts change (the amount of source and sink), while at least one species - the one at steady state - has a constant concentration as long as the conditions of steady state prevail. | {
"source": [
"https://chemistry.stackexchange.com/questions/110794",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/72973/"
]
} |
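To see the distinction in symbols, here is a minimal worked example for the $\text{source} \to \ce{C} \to \text{sink}$ scheme above, assuming simple first-order steps with rate constants $k_1$ and $k_2$: the steady-state condition for $\ce{C}$ is $$\frac{\mathrm{d}[\ce{C}]}{\mathrm{d}t} = k_1[\text{source}] - k_2[\ce{C}] = 0 \quad\Longrightarrow\quad [\ce{C}]_\mathrm{ss} = \frac{k_1}{k_2}[\text{source}].$$ The concentration of $\ce{C}$ stays constant even though material keeps flowing from source to sink; at a true equilibrium, by contrast, the net flow itself is zero.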
111,511 | Why does 2',6'-dimethylacetophenone not give the iodoform test? | As @Waylander pointed out, it appears this reaction has not been performed and/or recorded in any literature, so it is quite dangerous to speculate. But keeping that aside, a 3D perspective reveals that abstraction of protons from the methyl group is quite unhindered. Hence, the triiodo intermediate is well anticipated. However, a quick glance at the spatial orientation of the iodine atoms reveals that the reaction may be dead slow in the next step. Notice that the Burgi-Dunitz trajectory, which we may assume the incoming nucleophile to take, is hindered by the large iodine atoms and the methyl group. It is quite safe to assume that attack at the carbonyl carbon is unfavoured, preventing the release of $\ce{CI3-}$, so $\ce{CHI3}$ never appears. EDIT: Apparently there is some relevant literature available for similar compounds, as mentioned in this answer. Thanks to Mathew for searching and pointing it out. | {
"source": [
"https://chemistry.stackexchange.com/questions/111511",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/75445/"
]
} |
112,087 | I know this question has been asked previously, but I cannot find a satisfactory explanation as to why it is so difficult for $\ce{H4O^2+}$ to exist. There are explanations that it is so because of the $+2$ charge, but if that were the only reason, then the existence of species like $\ce{SO4^2-}$ should not have been possible. So, what exactly is the reason that makes $\ce{H4O^2+}$ so unstable? | I myself was always confused why $\ce{H3O^+}$ is so well-known and yet almost nobody talks of $\ce{H4O^2+}$. I mean, $\ce{H3O^+}$ still has a lone pair, right? Why can't another proton just latch onto that? Adding to the confusion, $\ce{H4O^2+}$ is very similar to $\ce{NH4+}$, which again is extremely well-known. Even further, the methanium cation $\ce{CH5+}$ exists (admittedly not something you'll find on a shelf), and that doesn't even have an available lone pair! It is very useful to rephrase the question "why is $\ce{H4O^2+}$ so rare?" into "why won't $\ce{H3O^+}$ accept another proton?". Now we can think of this in terms of an acid-base reaction: $$\ce{H3O^+ + H+ -> H4O^2+}$$ Yes, that's right. In this reaction $\ce{H3O^+}$ is the base, and $\ce{H^+}$ is the acid. Because solvents can strongly influence the acidity or basicity of dissolved compounds, and because inclusion of solvent makes calculations tremendously more complicated, we will restrict ourselves to the gas phase (hence $\ce{(g)}$ next to all the formulas). This means we will be talking about proton affinities. Before we get to business, though, let's start with something more familiar: $$\ce{H2O(g) + H+(g) -> H3O^+(g)}$$ Because this is in the gas phase, we can visualise the process very simply. We start with a lone water molecule in a perfect vacuum. Then, from a very large distance away, a lone proton begins its approach. We can calculate the potential energy of the whole system as a function of the distance between the oxygen atom and the distant proton. We get a graph that looks something like this: For convenience, we can set the potential energy of the system at 0 when the distance is infinite. At very large distances, the lone proton only very slightly tugs the electrons of the $\ce{H2O}$ molecule, but they attract and the system is slightly stabilised. The attraction gets stronger as the lone proton approaches. However, there is also a repulsive interaction, between the lone proton and the nuclei of the other atoms in the $\ce{H2O}$ molecule. At large distances, the attraction is stronger than the repulsion, but this flips around if the distance is too short. The happy medium is where the extra proton is close enough to dive into the molecule's electron cloud, but not close enough to experience severe repulsions with the other nuclei. In short, a lone proton from infinity is attracted to a water molecule, and the potential energy decreases up to a critical value, the bond length. The amount of energy lost is the proton affinity: in this scenario, a mole of water molecules reacting with a mole of protons would release approximately $\mathrm{697\ kJ\ mol^{-1}}$ (values from this table). This reaction is highly exothermic. Alright, now for the next step: $$\ce{H3O^+(g) + H+(g) -> H4O^2+(g)}$$ This should be similar, right? Actually, no. There is a very important difference between this reaction and the previous one; the reagents now both have a net positive charge. This means there is now a strong additional repulsive force between the two. In fact, the graph above changes completely. 
Starting from zero potential at infinity, instead of a slow decrease in potential energy, the lone proton has to climb uphill , fighting a net electrostatic repulsion. However, even more interestingly, if the proton does manage to get close enough, the electron cloud can abruptly envelop the additional proton and create a net attraction . The resulting graph now looks more like this: Very interestingly, the bottom of the "pocket" on the left of the graph (the potential well) can have a higher potential energy than if the lone proton was infinitely far away. This means the reaction is endothermic, but with enough effort, an extra proton can be pushed into the molecule, and it gets trapped in the pocket. Indeed, according to Olah et al. , J. Am. Chem. Soc. 1986 , 108 (5), pp 1032-1035 , the formation of $\ce{H4O^2+}$ in the gas phase was calculated to be endothermic by $\mathrm{248\ kJ\ mol^{-1}}$ (that is, the proton affinity of $\ce{H3O^+}$ is $\mathrm{-248\ kJ\ mol^{-1}}$ ), but once formed, it has a barrier towards decomposition (the activation energy towards release of a proton) of $\mathrm{184\ kJ\ mol^{-1}}$ (the potential well has a maximum depth of $\mathrm{184\ kJ\ mol^{-1}}$ ). Due to the fact that $\ce{H4O^2+}$ was calculated to form a potential well, it can in principle exist. However, since it is the product of a highly endothermic reaction, unsurprisingly it is very hard to find. The reality in solution phase is more complicated, but its existence has been physically verified (if indirectly). But why stop here? What about $\ce{H5O^3+}$ ? $$\ce{H4O^2+(g) + H+(g) -> H5O^3+(g)}$$ I've run a rough calculation myself using computational chemistry software, and here it seems we really do reach a wall. It appears that $\ce{H5O^3+}$ is an unbound system, which is to say that its potential energy curve has no pocket like the ones above. $\ce{H5O^3+}$ could only ever be made transiently, and it would immediately spit out at least one proton. The reason here really is the massive amount of electrical repulsion, combined with the fact that the electron cloud can't reach out to the distance necessary to accommodate another atom. You can make your own potential energy graphs here . Note how depending on the combination of parameters, the potential well can lie at negative potential energies (an exothermic reaction) or positive potential energies (an endothermic reaction). Alternatively, the pocket may not exist at all - these are the unbound systems. EDIT: I've done some calculations of proton affinities/stabilities on several other simple molecules, for comparison. I do not claim the results to be quantitatively correct. $$
\begin{array}{llllll}
\text{Species} & \ce{CH4} & \ce{CH5+} & \ce{CH6^2+} & \ce{CH7^3+} & \ce{CH8^4+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 556 & -246 & -1020 & N/A & N/A \\
\end{array}
$$ Notes: Even without a lone pair, methane ($\ce{CH4}$) protonates very exothermically in the gas phase. This is a testament to the enormous reactivity of a bare proton, and the huge difference it makes to not have to push a proton into an already positively-charged ion. For most of the seemingly hypercoordinate species in these tables (more than four bonds), the excess hydrogen atoms "pair up" such that the result can be viewed as an $\ce{H2}$ molecule binding sideways to the central atom. See the methanium link at the start. $$
\begin{array}{lllll}
\text{Species} & \ce{NH3} & \ce{NH4+} & \ce{NH5^2+} & \ce{NH6^3+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 896 & -410 & N/A & N/A \\
\end{array}
$$ Notes: Even though the first protonation is easier relative to $\ce{CH4}$ , the second one is harder. This is likely because increasing the electronegativity of the central atom makes the electron cloud "stiffer", and less accommodating to all those extra protons. The $\ce{NH5^{2+}}$ ion, unlike other ions listed here with more than four hydrogens, appears to be a true hypercoordinate species. Del Bene et al. indicate a five-coordinate square pyramidal structure with delocalized nitrogen-hydrogen bonds. $$
\begin{array}{lllll}
\text{Species} & \ce{H2O} & \ce{H3O+} & \ce{H4O^2+} & \ce{H5O^3+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 722 & -236 & N/A & N/A \\
\end{array}
$$ Notes: The first series which does not accommodate proton hypercoordination. $\ce{H3O+}$ is easier to protonate than $\ce{NH4+}$, even though oxygen is more electronegative. This is because $\ce{H4O^2+}$ nicely accommodates all its protons, while one of the protons in $\ce{NH5^2+}$ has to fight for its space. $$
\begin{array}{lllll}
\text{Species} & \ce{HF} & \ce{H2F+} & \ce{H3F^2+} & \ce{H4F^3+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 501 & -459 & N/A & N/A \\
\end{array}
$$ Notes: Even though $\ce{H3F^2+}$ still formally has a lone pair, its electron cloud is now so stiff that it cannot reach out to another proton even at normal bonding distance. $$
\begin{array}{llll}
\text{Species} & \ce{Ne} & \ce{NeH+} & \ce{NeH2^2+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 204 & N/A & N/A \\
\end{array}
$$ Notes: $\ce{Ne}$ is a notoriously unreactive noble gas, but it too will react exothermically with a bare proton in the gas phase. Depending on the definition of electronegativity used, it is possible to determine an electronegativity for $\ce{Ne}$ , which turns out to be even higher than $\ce{F}$ . Accordingly, its electron cloud is even stiffer. $$
\begin{array}{llllll}
\text{Species} & \ce{H2S} & \ce{H3S+} & \ce{H4S^2+} & \ce{H5S^3+} & \ce{H6S^4+} \\
\text{Stable in gas phase?} & \text{Yes} & \text{Yes} & \text{Yes} & \text{Yes} & \text{No} \\
\text{Approximate proton affinity}\ (\mathrm{kJ\ mol^{-1}}) & 752 & -121 & -1080 & N/A & N/A \\
\end{array}
$$ Notes: The lower electronegativity and larger size of $\ce{S}$ mean its electrons can reach out further and accommodate protons at a larger distance, while reducing repulsions between the nuclei. Thus, in the gas phase, $\ce{H2S}$ is a stronger base than $\ce{H2O}$. The situation is inverted in aqueous solution due to uniquely strong intermolecular interactions (hydrogen bonding), which are much more important for $\ce{H2O}$. $\ce{H3S+}$ also has an endothermic proton affinity, but it is lower than for $\ce{H3O+}$, and therefore $\ce{H4S^2+}$ is easier to make. Accordingly, $\ce{H4S^2+}$ has been detected under milder (though still superacidic!) conditions than $\ce{H4O^2+}$. The larger size and lower electronegativity of $\ce{S}$ are once again shown to be important; the hypercoordinate $\ce{H5S^3+}$ appears to exist, while the oxygen analogue doesn't. | {
"source": [
"https://chemistry.stackexchange.com/questions/112087",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/75583/"
]
} |
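To put a rough number on that "uphill climb", here is an order-of-magnitude sketch (Python; not taken from the cited paper - it evaluates only the bare Coulomb repulsion between two $+1$ point charges at a typical bonding distance of about $100\ \mathrm{pm}$):

```python
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
NA   = 6.02214076e23     # Avogadro constant, 1/mol
r    = 100e-12           # charge separation, m (~ an O-H bond length)

# Coulomb repulsion between two +1 point charges at distance r
E_pair = e**2 / (4 * math.pi * eps0 * r)   # J per pair
print(f"{E_pair / e:.1f} eV per pair")     # ~ 14.4 eV
print(f"{E_pair * NA / 1e3:.0f} kJ/mol")   # ~ 1389 kJ/mol
```

That raw repulsion is several times larger than the $\mathrm{248\ kJ\ mol^{-1}}$ net endothermicity quoted above, which is consistent with most of it being paid back once the proton is enveloped by the electron cloud.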
112,891 | Totally an elementary question. Staring at a candle, it appears that the bottom of the wick is dark whereas the top glows. However, the bottom of the flame (the blue) is the hottest. Is the reason for this that the concentration of liquid wax is greater at the bottom, offsetting the greater temperature at the bottom? | The wick temperature does not have to be the same as the flame temperature. The flame is hottest at the bottom, but the wick is hottest at the top. For a candle, the wick burning isn't the intended purpose of the wick; light comes from burning wax (more generally: fuel) - you want to burn the wax, not the wick. Rather, the purpose of a wick is to help fuel evaporate, by soaking up wax and allowing the radiant energy from the flame to heat the wax, causing it to evaporate and burn also. As wax travels up the wick, it evaporates, and less wax is in the wick the further up you go. Eventually the wax dries up and the radiant energy is heating a wick without any wax. Eventually the wick gets so hot at the tip that it will glow due to black-body radiation. In summary: though the blue is the hottest part of the flame, evaporating wax keeps the lower part of the wick cool. There is no wax at the top, and thus the radiant energy of the flame causes it to get hotter until it starts to glow. Extra: I will note that the top of the wick does not burn when lit, as the gas around the wick has too little oxygen to burn (unless the wick slumps outside the flame). When you blow out the candle, the low-oxygen environment of the flame is gone, and thus the hot wick will smoulder. | {
"source": [
"https://chemistry.stackexchange.com/questions/112891",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/77070/"
]
} |
114,837 | The Medium.com article Mars Phoenix Lander, 10 Years Later shows several remarkable images and discoveries on Mars by the Mars Phoenix Lander circa 2008. One image (shown below) shows what looks like droplets of liquid water, condensed on the surface of one of the lander's legs. The article says (emphasis mine): Shortly after landing, the camera on Phoenix's robotic arm captured views of blobs of material on one of the landing struts. Over time, these blobs moved, darkened, and coalesced, behaving like droplets of liquid water. The hypothesis here was that these blobs "splashed up" on the struts when the descent thrusters melted the ice exposed upon landing mentioned above. But if liquid water isn't stable on the martian surface, how did Phoenix observe liquid water on Mars? The key here lies in salt. If you live anywhere that gets snow, you're probably familiar with salt as a de-icer for roads, sidewalks, etc. Salt lowers the freezing point of water, allowing it to remain liquid at temperatures lower than that of non-salty water. For example, pure water freezes at 0 °C/32 °F, but ocean saltwater freezes around −2 °C/28.4 °F. While the de-icing salts you get at the hardware store lower the freezing point by a few degrees, more exotic salts can lower the freezing point as much as −70 °C/−89 °F! Phoenix discovered some of these exotic salts in the soil around the lander—in particular, magnesium perchlorate. (Note: minor editorial changes have been made.) Question: Which "exotic salt" can lower water's freezing point by 70 °C? Is it in fact magnesium perchlorate (which was found on Mars), or is it a different salt? Blobs of possible brine (really salty water) imaged on one of Phoenix's landing struts shortly after arriving on Mars. Credit: NASA/JPL-Caltech/University of Arizona/Max Planck Institute | I recently got a chance to attend a talk by someone who was working on developing analytical instrumentation for Mars. The interesting story is that the initial ion-selective electrode results suggested that Martian soil is full of nitrates. Nobody on Earth knew that the nitrate ion-selective electrode is far more responsive to perchlorate than to nitrate. Learning this was an eye-opener for analytical chemists! Now they wish to use chromatography rather than electrochemistry. So this was a good lesson for us on Earth. The perchlorate ion was discovered in 2008 by the nitrate-selective electrode; no specific electrode was attached to detect perchlorate, so it was rather an accidental discovery. The Science report "Detection of Perchlorate and the Soluble Chemistry of Martian Soil at the Phoenix Lander Site" (Science 2009, 325 (5936), 64–67) notes: A Hofmeister anion ISE was intended to monitor nitrate from a $\ce{LiNO3}$ reference electrolyte that was part of the leaching solution, but was ultimately used for perchlorate detection [Footnote] The relative sensitivity of the Hofmeister series ISE to perchlorate
over nitrate is 1000:1, and substantial quantities of perchlorate will
overwhelm any other signal. If, as was observed, >1 mM perchlorate
accounts for the observed signal, it would require >1000 mM nitrate to
produce the same response. This would correspond to more than the mass
of the entire sample. Once it was known to be the perchlorate ion, people did some studies on supercooled brines. See this paper: Toner, J.; Catling, D.; Light, B. The formation of supercooled brines, viscous liquids, and low-temperature perchlorate glasses in aqueous solutions relevant to Mars. Icarus 2014, 233, 36–47 (also available here). They clearly show that if calcium or magnesium perchlorate solutions are slowly cooled, one can get supercooled brines down to −120 °C. This is a rather amazing finding; they call it a glassy state. | {
"source": [
"https://chemistry.stackexchange.com/questions/114837",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/16035/"
]
} |
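As a rough, hedged sketch of the selectivity arithmetic in the footnote above (an editorial illustration, not part of the original answer): ion-selective electrode interference is commonly modeled with the Nikolsky–Eisenman equation; the function name, slope, and concentrations below are illustrative assumptions only.

```python
import math

def ise_potential(a_primary, a_interferent, k_selectivity,
                  e0=0.0, slope=-0.0592):
    """Nikolsky-Eisenman potential (V) for a monovalent-anion ISE.

    a_primary     -- activity of the primary ion (nitrate here), mol/L
    a_interferent -- activity of the interfering ion (perchlorate), mol/L
    k_selectivity -- selectivity coefficient K(primary, interferent)
    """
    return e0 + slope * math.log10(a_primary + k_selectivity * a_interferent)

# An electrode that prefers perchlorate 1000:1 over nitrate:
# 1 mM perchlorate alone reads the same as 1000 mM nitrate alone.
print(ise_potential(a_primary=0.0, a_interferent=1e-3, k_selectivity=1000))
print(ise_potential(a_primary=1.0, a_interferent=0.0,  k_selectivity=1000))
```

Both calls print the same potential, which is exactly the ">1 mM perchlorate would require >1000 mM nitrate" argument from the quoted footnote.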
115,646 | Instead of making potassium-strengthened glass by creating ordinary soda-lime glass first, then replacing the sodium atoms/ions with potassium by putting the glass in a bath/solution of potassium nitrate, why not replace the 'soda' (sodium carbonate) in the initial process(es) with potassium carbonate? | The potassium is not added first because potassium does not intrinsically make stronger glass; it is the substitution of a larger ion for a smaller one at the surface that does. To understand why the potassium must be exchanged in afterwards, you have to understand how the ion exchange makes the glass harder. Ion-exchange hardening is done at a temperature that allows ion diffusion but disallows reconfiguration of the glass structure or relaxation of bonds (i.e., below $T_{g}$, the glass transition temperature). When sodium atoms are replaced by potassium in the glass, the potassium ions occupy sites that are sized for sodium. This substitution of ions creates compressive stress at the sites where it occurs. Since the process is diffusive, the compressive stress is produced at the surface of the glass, and the interior is put in tension. The compressive stress at the surface puts any surface defects, namely scratches, into compression; this prevents them from growing or causing failure of the glass, thus strengthening it. If you make a glass with potassium from the start, you have potassium ions occupying potassium-sized sites, so no strengthening occurs. Thus the only way to chemically harden the glass is to substitute larger ions into it afterwards. Preemptive question: What about internal defects causing failure? Gorilla Glass is made via the fusion draw process, in which molten glass flows over two sides of a platinum trough and meets at the bottom to be rejoined. This process prevents internal defects to the point that they are very rare, and quality control can remove the few defective pieces that are made. | {
"source": [
"https://chemistry.stackexchange.com/questions/115646",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/64075/"
]
} |
118,198 | I was studying chemical bonding when I noticed something odd. We say compounds like $\ce{CCl4}$ and $\ce{CH4}$ have a tetrahedral geometry (which is a 3D structure), but when we talk about their dipole moments, we say they have no dipole moment. We give the reason that, as the H atoms are opposite each other (hence assuming it to be a 2D structure), they cancel out their bond moments. But why? In the beginning we saw that they are 3D structures with one H at the top and the other three at the bottom, with the bond dipole moment directed from each H towards C. Because of this, the components of the bond moments of the three lower H atoms along the line of the upper H atom would cause a net upward bond moment which is not equal to zero. But certainly I am wrong, as these values have been calculated scientifically. So can someone please point out where I am going wrong in my understanding? | Karsten's answer is excellent, but here is a figure that shows the mathematics involved: The central atom (green) is at the center of the cube, the four other atoms (purple) are at alternating vertices, and the geometry should be clear. Alternatively, if you orient the molecule so one peripheral (purple) atom is directly "above" the central (green) atom, then each of the other three atoms is just $180^\circ - \theta$ ($\approx 70.52877940^\circ$, where $\theta$ is the tetrahedral bond angle) away from being directly "under" the central atom, so each contributes $\cos(180^\circ - \theta)$ times the bond dipole. This is the downward component of the bond dipole from one of the lower atoms. But $\cos(180^\circ - \theta) = 1/3$, so this is simply (bond dipole)/3, and there are three of these lower atoms, so the three downward components exactly balance the one upward component. | {
"source": [
"https://chemistry.stackexchange.com/questions/118198",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/77131/"
]
} |
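A quick numerical check of the geometry argument in the preceding answer (an editorial sketch; the cube-vertex coordinates are the standard construction, not something given in the original post):

```python
import numpy as np

# Four alternating cube vertices form a regular tetrahedron around the origin.
bonds = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)  # unit bond vectors

print(bonds.sum(axis=0))  # ~[0, 0, 0]: the four equal bond dipoles cancel

# Tetrahedral bond angle and the cos(180 deg - theta) = 1/3 claim:
theta = np.degrees(np.arccos(bonds[0] @ bonds[1]))  # ~109.47 degrees
print(theta, np.cos(np.radians(180 - theta)))       # 109.47..., 0.3333...
```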
118,570 | I recently read in a book that rain is considered acid rain if the pH falls below 5.6. However, a substance is acidic when the pH is below 7, so why is the boundary for acid rain 5.6? I was thinking rain with a pH between 5.6 and 7 would be too dilute to have an effect on limestone and other materials, and that's why the pH has to be under 5.6. But I wasn't sure if this is true, so I wanted to find out. | The $\mathrm{pH}$ of pure water (rain as well as distilled water) in equilibrium with the atmosphere ($p_{\ce{CO2}}= 10^{-3.5}\ \mathrm{atm}$) can be calculated as follows. $$[\ce{H2CO3^*}]=K_\mathrm H\cdot p_{\ce{CO2}}$$ where $[\ce{H2CO3^*}]$ is the total analytical concentration of dissolved $\ce{CO2}$, i.e. $[\ce{H2CO3^*}]=[\ce{CO2(aq)}]+[\ce{H2CO3}]$, and $K_\mathrm H= 3.39\times10^{-2}\ \mathrm{mol\ l^{-1}\ atm^{-1}}$ is Henry's law constant for $\ce{CO2}$. $$\begin{align}
\log[\ce{H2CO3^*}]&=\log K_\mathrm H+\log p_{\ce{CO2}}\\
&=-1.5-3.5\\
&=-5.0
\end{align}$$ The commonly used first acid dissociation constant of carbonic acid $\mathrm pK_{\mathrm a1}=6.3$ (at $25\ \mathrm{^\circ C}$ ) actually is a composite constant that includes both the hydration reaction $$\ce{H2O + CO2(aq) <=> H2CO3}$$ and the protolysis of true $\ce{H2CO3}$ $$\ce{H2CO3 <=> H+ + HCO3-}$$ For a weak acid $$\begin{align}
\log[\ce{H+}]&\approx\frac12\left(\log K_\mathrm a+\log[\ce{H2CO3^*}]\right)\\
&=\frac12\left(-6.3-5.0\right)\\
&=-5.65\\
\mathrm{pH}&=5.65
\end{align}$$ Thus, pure rain in equilibrium with the atmosphere has about $\mathrm{pH}=5.65$ . Any acid rain with lower $\mathrm{pH}$ would be caused by additional acids. | {
"source": [
"https://chemistry.stackexchange.com/questions/118570",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/78670/"
]
} |
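The derivation in the preceding answer is easy to verify numerically (an editorial sketch using exactly the constants quoted in the answer):

```python
import math

K_H   = 10**-1.5   # Henry's law constant for CO2, mol L^-1 atm^-1
p_CO2 = 10**-3.5   # partial pressure of CO2 in the atmosphere, atm
K_a1  = 10**-6.3   # composite first dissociation constant of "H2CO3*"

h2co3_star = K_H * p_CO2               # total dissolved CO2, ~1e-5 mol/L
h_plus = math.sqrt(K_a1 * h2co3_star)  # weak-acid approximation
print(-math.log10(h_plus))             # pH ~ 5.65
```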
118,808 | I know the general idea behind X-ray crystallography is to take a high-quality crystal and place it in the path of an X-ray beam. Areas of high electron density will diffract the beam and lead to spots on a detector screen where photons have constructively interfered with each other (and nothing appears where destructive interference has occurred). The crystal is then rotated, another diffraction pattern image is taken, and the cycle repeats until there are enough views to reconstruct the atomic structure of whatever you crystallized. My question is: what is the general process for doing this without computer software? How was this done in the 1920s? The only equation I know for X-ray diffraction is Bragg's law, but is this the only equation used to interpret the data? Surely there must be others? How do you translate the spots on a detector to electron density plots using Bragg's law? | I agree with @andselisk that this question is quite broad. I will focus on two specific questions asked: The only equation I know for X-ray diffraction is Bragg's law, but is this the only equation used to interpret the data? [...] How do you translate the spots on a detector to electron density plots using Bragg's law? Apart from Bragg's law (which tells you where the diffraction spots are for a known orientation of a crystal with a known unit cell), it was also known that real space (electron density) and reciprocal space (diffraction pattern) are related by a 3D Fourier transform. For structures containing one or two atoms, just knowing the unit cell parameters and the symmetry is enough to get the entire structure. For anything slightly more complicated, the Fourier transform of the amplitudes or, later, of the intensities (Patterson methods, http://reference.iucr.org/dictionary/Patterson_methods ) had to be used. The first crystal structure analyses were of crystals with centrosymmetry, where the phase problem is easier to solve. My question is: what is the general process for doing this without computer software? Diffraction data was measured on film, with gray scales to assess the intensity of the signals. To calculate a Fourier transform, pre-computed tables were used, such as the Beevers–Lipson strips. As andselisk commented, the Fourier transform was used in the late 1920s, and initially for problems that were one- or two-dimensional. There is a nice account by P. P. Ewald available here: https://www.iucr.org/publ/50yearsofxraydiffraction/full-text/structure-analysis This was written at a time when the myoglobin structure had just been solved, after "50 years of X-ray diffraction". | {
"source": [
"https://chemistry.stackexchange.com/questions/118808",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/46159/"
]
} |
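To make the idea of hand-computed Fourier syntheses concrete, here is a minimal one-dimensional sketch (an editorial illustration; the amplitudes and phases are hypothetical, and real Beevers–Lipson work tabulated these cosine terms on printed strips for summation by hand):

```python
import math

def density_1d(x, reflections):
    """1D 'electron density' from (h, amplitude, phase) triples.

    For a centrosymmetric structure the phases are 0 or pi, so every
    term reduces to +/- F_h * cos(2*pi*h*x).
    """
    return sum(f * math.cos(2 * math.pi * h * x + phase)
               for h, f, phase in reflections)

sf = [(0, 10.0, 0.0), (1, 6.0, 0.0), (2, 3.0, math.pi)]   # made-up data
profile = [density_1d(i / 50, sf) for i in range(50)]     # 50-point grid
print(max(profile), min(profile))
```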
121,484 | I purchased a used Canon Powershot A1100 IS camera online. When I received it in the mail, it had a very strange sticky substance on it that looked like a thin film. It resisted attempts at cleaning: water, water + soap, baking soda + vinegar, baking soda only, vinegar only, and rubbing alcohol all did not resolve the issue. Only vigorous scrubbing kind of helped. In addition, I could not get it off my hands at all, even after about 10 hand-washing attempts and soaking my hands in soap and water. After my hands dried, the areas where they had touched the camera were still sticky. The only thing that helped was rubbing my hands with a paper towel after they dried. When wet it was not sticky; when it dried, the stickiness came back. The battery terminal was completely corroded with a blue color, and it seems like there was some, but possibly less, of the substance internally. This is part of the battery door; it is used to complete the circuit between the two AA batteries that the camera takes. Scraping away the blue crystal only reveals more of what you can already see in the picture: metal that looks grey in some places and brown/red in others. The closest thing I can think of is petroleum jelly, but I can't be sure that that is it. I don't have any and don't work with it much, so I can't compare what it looks like. Based on my description, do you have any ideas on what it might be? If you do have something that matches these symptoms, would you happen to know its toxicity and the proper cleanup procedure? Also, in general, how should one handle touching something unknown that could be dangerous, when hand-washing does not remove it? Thank you in advance for your questions and answers. | After the lengthy back and forth, perhaps an official answer is due: the blue salt deposit on the camera's battery door is most likely copper(II) hydroxide, formed by reaction of leaked alkaline battery electrolyte (likely $\ce{KOH}$) with copper metal in the battery door contact. It could possibly also consist of copper(II) carbonate hydroxide, formed by reaction of the copper(II) hydroxide with carbon dioxide, but the deep blue rather than green color in the picture suggests it is the former. Copper hydroxide is regarded as moderately hazardous, but you are unlikely to be at risk unless you ingest it. Feel free to consult an MSDS for this compound (such as here) for more information. The sticky substance coating the camera is possibly a degradation product of plastic components in the camera housing. It is not likely to be toxic unless ingested, or if you are in contact with it persistently. Without more information about how the camera was maintained, it is difficult to say much more. | {
"source": [
"https://chemistry.stackexchange.com/questions/121484",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/83917/"
]
} |
125,257 | I'm sorry if this is a broad question, but I am trying to plan a simple experiment. I am wondering whether there is a somewhat simple way of continuously measuring $\ce{CO2}$. I already know from research that the initial carbon dioxide content is around $\pu{2.2 g}$, but I don't know how this result is reached. | Measure the change in mass over time of the remaining liquid. Though some water will also evaporate, you can control for that by keeping the humidity near 100%. If you have to be precise, collect the outgassed $\ce{CO2}$ in a liquid-nitrogen cold trap. Check the mass of the condensate, which should equal that lost from the soda. Check its purity, to be really finicky. | {
"source": [
"https://chemistry.stackexchange.com/questions/125257",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/87077/"
]
} |
127,021 | On the Medical Sciences StackExchange site I asked Can a calorie be neither protein, carb, nor fat? and got a very helpful answer, which was that ethanol (in alcoholic drinks) is caloric but neither a protein, nor a carb, nor a fat. I then asked a follow-up question as a comment: Where can I find a list of all such substances? That is, substances that are caloric but neither protein, nor carb, nor fat. The person who answered my question recommended I post the question to this SE site. | If you go to https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:01990L0496-20081211&qid=1580028914722&from=EN you will see a fairly extensive list of caloric compounds (screenshot pasted below). This list is probably not comprehensive; there are likely many compounds we (which includes our gut bacteria) can metabolize for calories. Rather, it's limited to compounds that are caloric and approved for use in food. [It also doesn't include compounds that are naturally found in foods and may be caloric, but are only present in very small quantities, e.g., nucleic acids.] You can see that, in addition to fat, proteins, carbohydrates, and ethanol, the table lists polyols, organic acids, salatrims, and fiber. Some explanation of the table may be helpful: Polyols are artificial sweeteners. They include lactitol, maltitol, mannitol, sorbitol, xylitol, erythritol, glycerol, hydrogenated starch hydrolysates, and isomalt. [Though note that while erythritol is a polyol, it is listed separately as non-caloric. And while glycerol, aka glycerin or glycerine, is a sweetener, it is also used in food for other purposes.] Organic acids include acetic acid (the main component of vinegar, other than water), as well as citric acid, ascorbic acid, and malic acid (the latter three are found in citrus fruits). Fiber refers to dietary fiber, which we cannot digest completely ourselves; gut bacteria ferment part of it, which is why it still contributes some calories. Salatrims are "short and long chain acyl triglyceride molecules"; they are a type of low-calorie fat substitute. Source: Consolidated text: Council Directive of 24 September 1990 on nutrition labelling for foodstuffs (90/496/EEC) CELEX number: 01990L0496-20081211 Author: Council of the European Union Date of document: 11/12/2008 | {
"source": [
"https://chemistry.stackexchange.com/questions/127021",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/88248/"
]
} |
128,488 | Do molecules with bridges through rings (in a manner illustrated by this) exist? I sometimes get results like this when doing energy minimization on molview.org. For example: Is this actually a thing? EDIT: As a slightly more realistic example, consider this: | I'm not sure about the existence of molecules with bridges through rings. However, there are several publications on the synthesis of molecules mimicking wheels and axles ([2]rotaxanes; the "[2]" refers to the number of interlocked components), such as the one shown below (Ref. 1): (The diagram is from Reference 1.) This specific molecule (8; an "impossible" [2]rotaxane) consists of a macrocycle with a straight-chain molecule bearing bulky end groups going through its center. The two bulky end groups prevent the straight-chain molecule from leaving the macrocycle (the components are mechanically interlocked), as depicted in the diagram (see Ref. 2 for the total synthesis of the molecule). Note that Ref. 1 also cites articles on the synthesis of [2]catenanes, which contain two interlocked rings (instead of one axle and one macrocycle). Keep in mind that more advanced catenanes and rotaxanes also exist (e.g., [3]catenanes and [3]rotaxanes). (The structures are from Reference 1.) References: Neal, E. A.; Goldup, S. M. "Chemical consequences of mechanical bonding in catenanes and rotaxanes: isomerism, modification, catalysis and molecular machines for synthesis," Chem. Commun. 2014, 50 (40), 5128-5142 (https://doi.org/10.1039/C3CC47842D). Hannam, J. S.; Lacy, S. M.; Leigh, D. A.; Saiz, C. G.; Slawin, A. M. Z.; Stitchell, S. G. "Controlled Submolecular Translational Motion in Synthesis: A Mechanically Interlocking Auxiliary," Angew. Chem. Int. Ed. 2004, 43 (25), 3260-3264 (https://doi.org/10.1002/anie.200353606). | {
"source": [
"https://chemistry.stackexchange.com/questions/128488",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/87934/"
]
} |
129,122 | Our college is switching to teaching online mid-semester in hopes of slowing the spread of Covid-19. All the molecular model kits are still on campus. How could you build a model of tetrahedral coordination (say methane) from materials found at home? I'm aware of computer visualizations (and will make those available), but I think having a physical model when first encountering three-dimensional structures adds value. I made a model from lawn toys (see below), but these are not common household items. | Inflate balloons and tie them together at their stems like a bouquet of flowers. If you take four of them, not inflated too much, you demonstrate well a situation close to $sp^3$ hybridization. These models work equally well in larger lecture halls, by the way, and intentionally using different colors allows many options. (source) (screen photo) Do you need some worked examples? See videos like this or this. Do you need a scientific paper? Well, there are some as well, e.g., this one (open access), expanding the picture to extended $\pi$-systems: | {
"source": [
"https://chemistry.stackexchange.com/questions/129122",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/72973/"
]
} |
129,133 | I'm wondering what would happen if the filter paper in a vacuum filtration process is bigger than the Büchner funnel diameter.
I mean, would there be any differences, or would the whole experiment be ruined? If yes, may I know how and why? | | {
"source": [
"https://chemistry.stackexchange.com/questions/129133",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/89928/"
]
} |
130,963 | I never thought that modern American nickels actually contained nickel anymore. However, according to this wiki article, the coins actually do contain 25% nickel, the rest being copper. And yet, no US coin produced today is officially magnetic. Why is this alloy of nickel not attracted to a magnet? (And yes, I tried time and again to find an answer elsewhere online.) | There are many types of magnetic properties, including ferromagnetism, paramagnetism, diamagnetism, antiferromagnetism, ferrimagnetism, superparamagnetism, metamagnetism, spin glasses, and helimagnetism. Many of these are too weak to cause any noticeable interaction with a magnet. The type of everyday magnetism you're thinking of, which nickel has, is ferromagnetism. While nickel is ferromagnetic, copper is not. As you said, the American nickel is currently 25% nickel and 75% copper. According to this paper (from 1931!), in order for a nickel-copper alloy to be ferromagnetic, it must contain at least 56% nickel: ... 56 percent nickel is required before the alloy shows ferromagnetic properties at ordinary temperatures. https://journals.aps.org/pr/abstract/10.1103/PhysRev.38.828 E. H. Williams, Magnetic Properties of Copper-Nickel Alloys. Phys. Rev. 38, 828 (1931). | {
"source": [
"https://chemistry.stackexchange.com/questions/130963",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/90348/"
]
} |
132,488 | A racemate consists of 50 % $d$ and 50 % $l$ forms of an optically active compound. But how can someone ensure exactly 50 % of each, molecule by molecule? There will always be some difference in the number of molecules of the two forms of an optically active compound. Is this sufficient for the mixture to rotate plane-polarised light in a clockwise or anticlockwise direction? According to Wikipedia, "Racemization can be achieved by simply mixing equal quantities of two pure enantiomers." | Racemization isn't "exact," but rather very, very close to equality. It is just simple probability. Think of flipping a coin, with p = probability of heads and q = probability of tails. For a fair flip, p = q = 0.5. From binomial theory, the standard deviation is $\sqrt{n\cdot p \cdot q}$, where n is the number of flips. Now let's assume a difference of 2 standard deviations, which roughly corresponds to the 95% confidence interval. If you flip 10 pennies, then a two-standard-deviation difference is $2\times \sqrt{0.5^2\times 10} \approx 3$ in the number of heads. Now flip $6.022\times10^{23}$ dimes; then a two-standard-deviation difference is $2\times \sqrt{0.5^2\times 6.022\times10^{23}} \approx 7.8\times10^{11}$ in the number of heads. But now think of the % difference. An excess of $3$ heads in 10 tries for the pennies is $30\%$. An excess of $7.8\times10^{11}$ heads when flipping $6.022\times10^{23}$ dimes is only a difference of $1.3\times 10^{-10}\%$, which is insignificantly small. | {
"source": [
"https://chemistry.stackexchange.com/questions/132488",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/55479/"
]
} |
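The coin-flip numbers in the preceding answer are easy to reproduce (an editorial check):

```python
import math

def two_sigma_excess(n, p=0.5):
    """Two binomial standard deviations, 2*sqrt(n*p*q), in counts."""
    return 2 * math.sqrt(n * p * (1 - p))

for n in (10, 6.022e23):
    excess = two_sigma_excess(n)
    print(f"n = {n:.3g}: 2-sigma excess ~ {excess:.2g} "
          f"({100 * excess / n:.2g} % imbalance)")
# n = 10: ~3.2 heads (~32 %); n = 6.022e23: ~7.8e11 (~1.3e-10 %)
```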
132,495 | The reaction: (a) m-hydroxyphenol + aq. $\ce{NaHCO3}$/boil; (b) workup with $\ce{H3O+/H2O}$. I know that $\ce{CO2}$ will be formed by boiling aq. $\ce{NaHCO3}$, and so the reaction will be of the Kolbe–Schmitt type, i.e. a $\ce{COOH}$ group will be attached to the benzene ring. But I am unsure where the addition of the $\ce{COOH}$ group will take place: (1) ortho-position w.r.t. both hydroxy groups $$\text{or}$$ (2) ortho-position w.r.t. one hydroxy group and para-position w.r.t. the other group. So, which will be the major product and why? | | {
"source": [
"https://chemistry.stackexchange.com/questions/132495",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/91789/"
]
} |
134,828 | We know that when two objects are placed in contact with each other, after a period of time the two objects will have the same temperature. Thus, if a hot body comes into contact with a relatively cold body, it will lose heat and its temperature will drop, while the temperature of the 'cold' body will increase. If I put a thermometer in an object to measure its temperature, the body will lose heat (or gain heat, if the thermometer is hotter) until it reaches the same temperature as the thermometer. Doesn't this mean the thermometer is not giving us the EXACT temperature of the object, i.e. the temperature before the measurement was taken, and is, in fact, showing us a lower/higher temperature? | I'll start by mentioning that there's no such thing as an exact measurement; there is always some measurement error. The only observations that can be numerically exact are counted numbers of discrete objects (e.g., the number of electrons in a neutral carbon atom is exactly 6). And I say "can be", because if the numbers are sufficiently large, even with counted numbers there can be errors (we see this with vote counting). Having said that, your idea is correct: if the heat capacity of the temperature probe is significant relative to that of the object being measured, then the measurement can significantly change the temperature of your object. This is a particular concern when measuring small objects. For such applications, researchers can employ (as Jon Custer mentioned in his comment) non-contact thermometry. See, for instance: https://www.omega.co.uk/temperature/z/noncontacttm.html | {
"source": [
"https://chemistry.stackexchange.com/questions/134828",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/89183/"
]
} |
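A minimal sketch of the effect described in the preceding answer, treating thermometer and object as two bodies that simply share heat (an editorial addition; the numbers are hypothetical and heat exchange with the surroundings is ignored):

```python
def contact_temperature(c_obj, t_obj, c_probe, t_probe):
    """Equilibrium temperature from a simple two-body heat balance.

    c_obj, c_probe -- heat capacities (J/K); t_obj, t_probe -- initial temperatures.
    """
    return (c_obj * t_obj + c_probe * t_probe) / (c_obj + c_probe)

# A tiny 1 J/K sample at 80 degC probed with a 0.5 J/K thermometer at 20 degC:
print(contact_temperature(1.0, 80.0, 0.5, 20.0))     # 60.0 -- badly perturbed
# The same probe on a 1000 J/K object barely changes the reading:
print(contact_temperature(1000.0, 80.0, 0.5, 20.0))  # ~79.97
```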
134,855 | The paper Analysis of cyclic pyrolysis products formed from amino acid monomer has the following to say in its experimental section: Glycine, phenylalanine, tyrosine, tryptophan, serine, and valine were purchased from Daejung Chemicals & Metals Co. (Korea). Arginine, aspartic acid, glutamic acid, and proline were purchased from Samchun Pure Chemical Co. (Korea). Alanine, asparagine, glutamine, histidine, leucine, and threonine were purchased from Junsei Chemical Co. (Japan). Isoleucine and lysine were purchased from Acros Organics (USA), and cysteine was purchased from Merck Co. (Germany). Methionine was purchased from Yakuri Pure Chemicals Co. (Japan). Fumed silica was purchased from Merck Co. (Germany).
... Pyrolysis–GC/MS was conducted with a CDS Pyroprobe 1500 heated filament pyrolyzer (Chemical Data System, Oxford, USA) coupled to an Agilent 6890 gas chromatograph equipped with a 5973 mass spectrometer of Agilent Technology Inc. (USA) Why is there so much emphasis on the sources and country of origin of chemicals/instruments in these sections? This behaviour is not restricted to this paper; several other papers go into similar levels of detail regarding their procedures. | Experimental details are very, very important: they are used to ensure that results are reproducible, or at least that is the aim. Let's talk about this specific example of suppliers and sources. One might naively think that a chemical purchased from supplier X is the same as the chemical from supplier Y. In an ideal world, that would be how it is. Unfortunately, this isn't the case, due to various considerations, e.g., method of preparation, packaging, and so on. Therefore, even though the majority of the compound may be the same, there can be different impurities. And unfortunately, these impurities can lead to reactions working or failing. Many experimental chemists will personally know of cases where a reaction doesn't work until a reagent from a different company is used, or of a reaction that stops working after buying a different bottle of a reagent. There are many examples of this in the literature, but I'll choose the first one that came to mind, which is a reasonably recent paper from my old group (open access): Mekareeya, A.; Walker, P. R.; Couce-Rios, A.; Campbell, C. D.; Steven, A.; Paton, R. S.; Anderson, E. A. Mechanistic Insight into Palladium-Catalyzed Cycloisomerization: A Combined Experimental and Theoretical Study. J. Am. Chem. Soc. 2017, 139 (29), 10104–10114. DOI: 10.1021/jacs.7b05436. During these investigations, we also uncovered a critical dependence of the stereochemical outcome of the reaction on the batch of Pd(OAc)2 employed as catalyst. This discovery was made through the chance purchase of Pd(OAc)2 from a different supplier, which led to an unexpected ratio of enamide alkene geometries ((Z):(E) = 80:20), rather than the typical ratio of ∼97:3. Screening of further samples of Pd(OAc)2 gave variable results, the most extreme being a reversal of stereoselectivity to 40:60 in favor of the (E)-isomer. The exact supplier isn't named, but 1H NMR spectra of various batches are given, and the performance in the reaction can be correlated with the exact form of Pd(II) in the catalyst, which ensures that future readers are aware of what to look out for. This is very important for good science, because what's the point of publishing your fancy new method if nobody can replicate it? Of course, this doesn't always happen. However, it's far better to include the information, just in case it is needed. As mentioned by Waylander in the comments, commercial samples of samarium diiodide (SmI2) also have a reputation for being inconsistent: see e.g. this report by Szostak, M.; Spain, M.; Procter, D. J. J. Org. Chem. 2012, 77 (7), 3049–3059. DOI: 10.1021/jo300135v. | {
"source": [
"https://chemistry.stackexchange.com/questions/134855",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/65872/"
]
} |
135,167 | This question is derived from a question asked in my school test. What happens when a magnesium ribbon is heated in air? My first response was the formation of magnesium oxide $(\ce{MgO})$ when oxygen in the air reacts with magnesium at a high temperature, which can be expressed in the form of a chemical equation like this: $$\ce{2 Mg(s) + O2(g) ->[\Delta] 2 MgO(s)},$$ but I was wondering if magnesium could react with any other gas in the air to form a compound with that gas, and I found out that magnesium does react with nitrogen in the air to form magnesium nitride too: $$\ce{3 Mg(s) + N2(g) ->[\Delta] Mg3N2(s)}.$$ What determines whether the heated magnesium ribbon will react with the oxygen in the atmosphere or the nitrogen in the atmosphere? Two possibilities that I can think of are: composition of the air; temperature. I don't think that composition is the answer, because on average the atmosphere of Earth has more nitrogen than oxygen, so I think that the answer may be temperature. I'd also like to know how the factor affects the chemical reaction on an atomic level. | A large pile of grey magnesium powder, when lit in air, produces a smouldering pile which cools down to reveal a crusty white solid of magnesium oxide. However, if you break apart the mound, you can find something quite strange in the middle: a clearly brownish powder that wasn't there before. Seeing is believing! The author of the video also has a clever way to identify the brown solid: by adding water and placing some moist pH paper above the puddle, it clearly shows the transfer of some alkaline substance across the gap. This is ammonia gas, $\ce{NH3}$, whose presence is explained by the hydrolysis of magnesium nitride: $$\ce{Mg3N2(s) + 6H2O(l) -> 3 Mg(OH)2(aq) + 2 NH3(g)}$$ It is important that the pH paper not come into direct contact with the water used to hydrolyze the magnesium nitride, as $\ce{Mg(OH)2}$ is itself also basic and could also be formed by reaction with either $\ce{MgO}$ or $\ce{Mg}$ directly. Only $\ce{Mg3N2}$ produces a basic gas which forms an alkaline solution in water. As you can see, magnesium metal does react directly with molecular nitrogen ($\ce{N2}$) when burned in air. However, the reaction is thermodynamically and kinetically less favourable than the reaction with molecular oxygen ($\ce{O2}$). This is almost certainly due to the extreme strength of the bond between nitrogen atoms in molecular $\ce{N2}$, whose bond dissociation energy of $\mathrm{945\ kJ\ mol^{-1}}$ is one of the strongest in all of chemistry, second only to the bond in carbon monoxide. For comparison, the bond dissociation energy of molecular $\ce{O2}$ is drastically lower, at $\mathrm{498\ kJ\ mol^{-1}}$. So why did the Chem13 magazine article referenced in Aniruddha Deb's answer not find any magnesium nitride? It is likely that 1 g of magnesium metal is far too little for the experiment as run under their conditions. It takes a significant amount of "sacrificial" magnesium to completely consume the oxygen in its surroundings. Only once practically all the oxygen is consumed (and while the pile of magnesium is still hot enough from the reaction between magnesium and oxygen) will the remaining magnesium metal react with the nitrogen in the air. Alternatively, the reaction would have to be performed in an oxygen-free environment.
Magnesium metal is such a strong reductant that many substances can act as an oxidant for it, including pure $\ce{CO2}$ (also shown in the video above) and water (never put out a magnesium fire with water!). | {
"source": [
"https://chemistry.stackexchange.com/questions/135167",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/92430/"
]
} |
135,436 | I understand that we fill potato chip bags with nitrogen to prevent oxidation. But why do we use nitrogen, instead of neon or hydrogen or something else? My first guess is that nitrogen is lighter than neon/argon, but what about hydrogen or helium? | As Nilay Ghosh said, nitrogen is cheap. Very cheap. Neon is expensive. Argon is cheaper than neon, but considerably more expensive than nitrogen. Helium is also expensive and needs to be used wisely, for important things, e.g., cryogenics. And hydrogen! I can just see the ads: "Buy our chips: they are lighter than air! But avoid open flames and sparks unless you want to be Hindenburged to a crisp (no pun intended)." Another fill gas to avoid is sulfur hexafluoride. A tennis ball manufacturer once decided to fill tennis balls with sulfur hexafluoride, assuming this would prevent the balls from going flat as a consequence of the high molar mass of sulfur hexafluoride. But the tennis balls exploded on the shelves because air diffused in. Thankfully, no one has ever tried using nitrous oxide as the fill gas in potato chip bags! | {
"source": [
"https://chemistry.stackexchange.com/questions/135436",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/2303/"
]
} |
137,171 | Methanol ($\ce{CH3OH}$) and ethanol ($\ce{C2H5OH}$) are both organic compounds bearing an alcohol group. Alcoholic beverages (liquors and spirits) for human consumption (albeit injurious to health) contain a certain percentage of ethanol but no methanol. What chemical properties of ethanol ($\ce{C2H5OH}$) make it usable in beverages as compared to those of methanol ($\ce{CH3OH}$)? EDIT: I am looking for a comparison of the chemical properties of ethanol and methanol with respect to suitability for drinks, i.e. a comparison of their mechanisms upon consumption. | The problem arises from the metabolic products of methanol. Methanol is oxidized in the liver by an enzyme called alcohol dehydrogenase to formaldehyde, which is further metabolized to formic acid by another enzyme called aldehyde dehydrogenase. This formic acid is the source of the acute toxicity associated with methanol poisoning. Accumulation of this chemical in the blood deprives cells of oxygen by inhibiting the enzyme cytochrome c oxidase in their mitochondria, a key element of the respiratory electron transport chain.
Formic acid, together with formaldehyde, is responsible for the nerve damage, blindness, and other unpleasant effects associated with methanol poisoning. Note that ethanol is also metabolized in the same way by the same pair of enzymes, ultimately forming acetic acid, but humans can tolerate acetic acid to an extent because it is much less toxic than formic acid; it can therefore be consumed in, for example, vinegar. In fact, ethanol is used as a remedy for methanol poisoning: it acts as a competitive inhibitor, binding to and saturating the alcohol dehydrogenase enzyme in the liver more effectively, thus blocking the binding of methanol to the enzyme. References https://theskepticalchemist.com/methanol-toxic-ethanol/ (above reaction source) https://metode.org/metodes-whys-and-wherefores/why-can-we-drink-ethanol-but-not-methanol.html https://en.wikipedia.org/wiki/Methanol_toxicity | {
"source": [
"https://chemistry.stackexchange.com/questions/137171",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/22511/"
]
} |
137,193 | What is the best chromatographic technique to analyse a solution containing benzene, toluene, and ethylbenzene in n-hexane? Is gas chromatography (GC) the best way to analyse this solution? Or is using liquid chromatography with a non-polar eluent and column better than GC in this case? | | {
"source": [
"https://chemistry.stackexchange.com/questions/137193",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/96343/"
]
} |
138,440 | I thought ammonium nitrate was an oxidizer that needed to be mixed with fuel to form a high explosive (e.g., ANFO). But apparently there have been accidental explosions involving just the "fertilizer". Are these explosions also detonations? What is the chemical formula for the process? $$\ce{NH4NO3 -> ???}$$ Part of my motive for asking is the news today (Aug 4, 2020) of an explosion in Beirut. Initial reports say it was caused by "2750 tons of stored ammonium nitrate". | It is known that ammonium nitrate decomposes exothermically when heated to form nitrous oxide and water. This paper [1] notes that the irreversible decomposition of ammonium nitrate occurs in the temperature range of $\pu{230-260 ^\circ C}$. $$\ce{NH4NO3 ->[t >230 ^\circ C] N2O + 2H2O}$$ They further note that beyond $\pu{280 ^\circ C}$, $\ce{NH4NO3}$ is capable of rapid, self-accelerating decomposition (to the point of detonation). But at the detonation temperature, $\mathrm{t_d}$ (the temperature at which compounds detonate), ammonium nitrate fully decomposes to nitrogen, oxygen and water, releasing a tremendous amount of energy. $$\ce{2NH4NO3 ->[t_d] 2N2 + O2 + 4H2O}$$ In the context of the Beirut explosion, the question raised was "when did the ammonium nitrate reach detonation temperature, and why did it suddenly explode?". According to a news report from cnet.com: When heated to above 170 degrees Fahrenheit, ammonium nitrate begins
to undergo decomposition. But with rapid heating or detonation, a
chemical reaction can occur that converts ammonium nitrate to nitrogen
and oxygen gas and water vapor. The products of the reaction are
harmless -- they're found in our atmosphere -- but the process
releases huge amounts of energy. [...] Additionally, in the explosion, not all of the ammonium nitrate is
used up and exploded. Some of it decomposes slowly creating toxic
gases like nitrogen oxides. It's these gases that are responsible for
the red-brown plume of smoke seen in the aftermath of the Beirut
explosion, Rae said. So, my theory is that the ammonium nitrate started heating up (from the fire), releasing all sorts of nitrogen oxides (the red fumes). The fire further accelerated the reaction, heating the remaining ammonium nitrate to the point of detonation; that is when the ammonium nitrate exploded almost instantaneously, releasing a tremendous amount of energy that sent shockwaves around the site, along with a white, mushroom-shaped cloud (from @DDuck's comment), probably nitrogen and/or water vapour formed where the humid (water-vapour-laden) air condensed due to the explosion (@StianYttervik). It is a sad and quite devastating incident. References
Feick, G.; Hainer, R. M. On the Thermal Decomposition of Ammonium Nitrate: Steady-state Reaction Temperatures and Reaction Rate, 1954 (PDF)
Cook, M. A.; Mayfield, E. B.; Partridge, W. S. Reaction Rates of Ammonium Nitrate in Detonation. The Journal of Physical Chemistry 1955, 59 (8), 675–680. DOI: 10.1021/j150530a002 (PDF)
https://en.wikipedia.org/wiki/2020_Beirut_explosions | {
"source": [
"https://chemistry.stackexchange.com/questions/138440",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/4121/"
]
} |
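As a rough editorial check on how much energy the detonation equation above releases, Hess's law with standard tabulated enthalpies of formation (these values are textbook numbers, not taken from the answer) gives:

```python
# Approximate standard enthalpies of formation, kJ/mol
dHf = {"NH4NO3(s)": -365.6, "N2(g)": 0.0, "O2(g)": 0.0, "H2O(g)": -241.8}

# 2 NH4NO3(s) -> 2 N2(g) + O2(g) + 4 H2O(g)
dH_rxn = (2 * dHf["N2(g)"] + dHf["O2(g)"] + 4 * dHf["H2O(g)"]) \
         - 2 * dHf["NH4NO3(s)"]

molar_mass = 80.04  # g/mol for NH4NO3
print(dH_rxn / 2)                      # ~ -118 kJ per mole of NH4NO3
print(dH_rxn / 2 / molar_mass * 1000)  # ~ -1.5 kJ per gram
```

Scaled naively to 2750 tons, this is on the order of a kiloton of TNT equivalent, which is roughly consistent with published estimates of the Beirut blast.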
138,559 | If we take some aqueous solution and dilute it further and further, will the concentration of the solution ever get to zero? I would say no, simply because total dilution implies that all the molecules of the solute have literally disappeared. But the fact that I am unable to figure out where the molecules have gone doesn't make my argument compelling at all. I am led to believe that there is a far better answer and/or explanation to my question. UPDATE: While it is true that this question closely resembles the linked one, the answer provided in this question is a lot better as it gives a much deeper insight into the dilution process. | It depends how you dilute it. If you take an aqueous solution of A and just add pure water (absolutely 100% water), the concentration of A will never quite be zero. In this case, however, you will reach a point where the concentration of A is so small that it can be considered zero for your applications. If, however, you dilute the solution, take a sample, then dilute that sample (and so on), you could reach a concentration of exactly 0 M. Imagine you have diluted the solution enough that it contains exactly one molecule of A. When you take your sample for the next dilution, if this molecule isn't in the sample, the concentration will be exactly zero. If it does happen to be in the sample, it could be left behind when you draw the next sample, or the next, and so on. In practice, though, the water you use for the dilution will likely contain impurities. You may not achieve exactly 0 M, but the concentration could be so small that it is undetectable and has no measurable consequence. | {
"source": [
"https://chemistry.stackexchange.com/questions/138559",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/97618/"
]
} |
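The serial-dilution argument in the preceding answer can be illustrated with a one-line expectation calculation (an editorial sketch, assuming ideal mixing):

```python
def expected_molecules(n0, dilution_factor, steps):
    """Expected number of solute molecules left after repeated 1-in-f sampling."""
    return n0 / dilution_factor**steps

n0 = 6.022e23  # start with one mole of solute molecules
for steps in (10, 20, 24, 30):
    print(steps, expected_molecules(n0, 10, steps))
# After ~24 tenfold dilutions the expected count drops below one molecule;
# from then on each sample has a real chance of containing zero molecules.
```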
138,678 | So I am reading a book called "Voices from Chernobyl" in which witnesses, nuclear plant workers, firefighters and other people involved in the 1986 accident give testimony of their experiences. The very first chapter tells the story of the wife of a firefighter who was dispatched for duty on the very day of the incident. He suffered acute radiation sickness and was hospitalized. His wife remained with him throughout the last days before he died. One thing that is mentioned very clearly in the book is how nurses constantly tried to warn his wife not to touch him, hug him, or even share objects with him. In this article, it is mentioned that a person who was exposed to radiation should not be a danger to others once his clothes have been disposed of. So, my question is: I am getting confused between the two readings. Is a person who has been exposed to a big dose of radiation a danger to other people? Why? | Summary: there is not necessarily a contradiction between the two. Radiation is not contagious, and a person who has been exposed to ionizing radiation is not dangerous to other people once they are no longer contaminated with radioactive material; but while they are still contaminated with radioactive material, they may pose a danger (the highest danger is to themselves, though). In any case, other people are dangerous to someone who suffers radiation illness (infection risk). Moreover, thinking radiation is contagious is (still!) a fairly widespread mistake, and I'd think misunderstanding the purpose of the instructions (her being dangerous to her husband rather than the other way round) quite likely in a highly stressful situation such as the husband dying. That is, I wouldn't expect her to spend time then on double-checking whether she correctly understood the reason behind the instructions and ask, "Sorry, sister, is this because he's dangerous to me or because I'm dangerous to him?". Thus, it should not be surprising to meet such misconceptions in such a book. Nor would I be surprised if readers misunderstand the book this way even if the firefighter's wife is and was perfectly aware that her husband did not pose a radiation danger to her. I'm writing the remainder of this answer mostly assuming that the mistake is in the book rather than on the OP's side. There are basically only two possibilities to clear up a misunderstanding that happened back then in a book of witness reports: the witness saying "nowadays I know I was dangerous to him, not he to me; but back then I didn't know that", or the editor putting in a footnote explaining the misunderstanding and noting that it was widespread at the time. Ionizing radiation and contamination with radioactive substances Radiation is not contagious in the sense that the firefighter's exposure to ionizing radiation did not make him radiate himself. That being said, induced radioactivity exists. But that needs very particular conditions to happen to any practically relevant extent. In the sense of the question, you won't be able to produce any measurable effect in a living organism, not even a living organism that is about to die of the radiation. The firefighter of course radiates like any other living organism: humans radiate with about 4 kBq due to containing about 15 mg of $\ce{^{40}K}$. But: chemical contamination, including with radioactive substances, can be transferred from one body to another, and the contaminants can be incorporated and accumulated where they cause much damage.
(I also wouldn't call that contagious, since the total amount of radioactive material does not increase; but if you consider, say, crystal violet or methylene blue contagious "since if you touch someone, the ones touched by them will be violet/blue", then you can also call the radioactive substances contagious.) However, any radioactive substance contamination that is transferred in a practically relevant amount by touching can also be washed off, and that is the first thing to do for decontamination, besides taking off potentially contaminated clothes. Any injury that means the area cannot be thoroughly washed also cannot be touched (because of the injury). Thirdly, during the decay of such radioactive material, other substances form, which may be far more difficult to get rid of (see the radon example below). Here, one may say that one can "catch" a contamination that one doesn't easily get rid of again. However, any such contamination would not be transferred from the firefighter to his wife. X-rays/γ-rays: these are ionizing electromagnetic rays, i.e. high-energy photons. They cause damage when absorbed, either by directly damaging some biomolecule or by forming OH⋅ radicals/ROS which in turn cause further damage. The radicals in themselves are nothing very special: they occur all the time as side products of our energy metabolism, and we have powerful mechanisms to cope with them. Part of radiation illness is that these mechanisms are overwhelmed. So, after exposure to X- or γ-rays, we have radicals inside the body, but no "foreign" nuclei and no body-surface contamination, i.e. nothing that became radioactive because of the exposure to high-energy photons. A somewhat different example related to Chernobyl and Fukushima would be incorporation of radioactive $\ce{^{131}I}$ in the thyroid gland. In particular, if someone with iodine deficiency incorporates iodine, pretty much all of it will end up in the thyroid gland. If that available iodine is $\ce{^{131}I}$, their thyroid gland will subsequently be exposed to large radiation doses. This incorporated radioactivity does include γ radiation, part of which leaves the body. $\ce{^{131}I}$ is administered in radiotherapy in doses where the patients are, e.g., kept in hospital for approximately 2 days (at least here in Germany) so as not to contaminate the wastewater with the radioactive $\ce{^{131}I}$ they excrete in their urine. Such patients are also advised to avoid close contact for, e.g., a week after treatment in order not to cause accidental exposure to others, in particular children and pregnant women. Such radiotherapy treatments use dosages in the 100-400 Gy range to the thyroid, and the guidelines of course include a safety margin. A quick search for $\ce{^{131}I}$ radiation doses to the thyroid in Ukrainian children after Chernobyl got me to Brenner et al: I-131 Dose Response for Incident Thyroid Cancers in Ukraine Related to the Chornobyl Accident. The largest dose category is > 3.0 Gy, and a diagram has a point a bit below 5 Gy, so 1-2 orders of magnitude below the radiotherapy doses.
My conclusion from this is that even if the firefighter got a $\ce{^{131}I}$ dose large enough to kill off his thyroid, the wife giving the dying husband several close goodbye hugs 10-14 days after the exposure would be unlikely to pose a significant threat to her health due to radiation from his thyroid (and under the particular circumstances, the $\ce{^{131}I}$ she herself ingested after the accident would be a far more important concern for her health). In this guest post to the Cancer Letter, R. P. Gale discusses some of his experiences as an MD at the famous hospital 6 (the Soviet Russian radiation clinic) treating the radiation illness patients from Chernobyl, with a particular view to the HBO series. Another error was to portray the victims as being dangerously radioactive. Most radiation contamination was superficial and relatively easily managed by routine procedures. This is entirely different than the Goiania accident, where the victims ate 137-cesium and we had to isolate them from most medical personnel. Lastly, there is the dangerous representation that, because one of the victims was radioactive, his pregnant wife endangered her unborn child by entering his hospital room. First, as discussed, none of the victims were radioactive—their exposures were almost exclusively external, not internal. More importantly, risk to a fetus from an exposure like this is infinitesimally small. Valid reasons for not allowing the wife close to the husband that have nothing to do with radiation being "contagious": Radiation illness: the bone marrow is rather radiation-sensitive, and leukopenia (too low a leukocyte count, a type of immune suppression) is a typical part of radiation illness. A radiation illness patient is thus at very high risk from infections. Radiation illness often comes with burns (the skin is most exposed, and for α and β radiation, almost all damage happens in the skin). Even "normal" severe burns are doubly difficult in terms of infections: the skin damage means that the normal protective barrier against microorganisms is broken down in those areas, and in addition there is a severe immune suppression (after the initial inflammatory response). Infections cause half of the deaths after severe burns. Both are very valid safety reasons, just for the firefighter's safety rather than for his wife's. Saying that the wife can go close to the firefighter would amount to saying "He's going to die within the next few days anyway; it doesn't matter whether he catches an additional sepsis." "Contagious" radiation as a wrong but possibly valid concern after the Chernobyl accident So from a scientific point of view, radiation is not "contagious". Nevertheless, there is a still widespread, though mistaken, fear of this. Personally, here and now, I count this pretty much in the tin-foil-hat corner. But on the other hand, in the situation in the Ukraine (Ukrainian SSR) immediately after the accident, I think it a more understandable concern, since the possibilities to check whether this concern was valid were severely limited, not only for the general population but even for the medical staff. In such a situation, it is a valid decision to err on the side of caution. How much did the hospital staff know about radiation medicine? Had they any experience with radiation injuries of this kind? There was a major accident, no internet, and a very restrictive information policy even without a disaster.
In the video interview (thanks to @TAR86), Alla Shapiro describes how the "hush-up" included deliberately hindering/preventing the medical staff from accessing medical information about radiation. She also explains that their medical training did not include radiation. Note, however, that she was at a clinic in Kiev; R. P. Gale describes much better expertise at hospital 6 in Moscow (where most of the radiation illnesses were treated). The post you link says that thorough washing is sufficient for the Fukushima evacuees, and the experts linked above agree for the Chernobyl firefighters. However, Shapiro also says that there was a whole lot of this fear in the general population that radiation/radioactivity could be "contagious", i.e. that someone who was exposed to ionizing radiation would radiate themselves and thus pose a danger*. In a situation where probably almost everyone realized that the government was trying to hush up a big problem (a correct assessment), and where, in addition, any relevant information was locked in the "poison cupboard" of the libraries and not accessible (no way to obtain factual information): would you trust "information" that these patients are not "contagious" w.r.t. radiation, i.e. information that is very much what the government wishes, or would you tend to err on the safe side? In addition: how much did the hospital staff know about what had actually happened and what the firefighter had actually been exposed to? With those activities to hush up the Chernobyl incident, the hospital staff may have been unsure about what else the firefighter had been exposed to besides high doses of radiation. Remaining contamination with radioactive material (including incorporated material) can be measured comparatively easily. However, I have no idea whether such instruments were available at the hospitals, say, in Kiev, to measure the remaining contamination of their patients: a) the available instruments may have been needed more urgently at the site of the power plant, and b) political considerations/hushing-up may also have stood against that. I'd expect the Moscow hospital to have had all kinds of instruments (but that may be my prejudice). * I'm pretty sure that a non-negligible fraction of the population here in Germany would express that fear if you asked them, including medical staff, and even after Chernobyl and Fukushima. If you want an example, have a look at this post (in German) about radiation treatment of food on a web site run by the official consumer protection organizations: Werden Lebensmittel mit ionisierenden Strahlen behandelt, wird die Strahlenmenge genau dosiert. Die Energiemenge ist so gering, dass die Lebensmittel nicht radioaktiv werden und sich nur leicht erwärmen. My translation and my emphasis: When food is treated with ionizing radiation, the amount of radiation is accurately dosed. The amount of energy is so small that the food does not become radioactive and only heats up slightly. While it is of course true that the food does not become radioactive, that sentence IMHO insinuates that this could [easily] be the case with higher doses. And one comment (out of a total of 14) clearly indicates that the writer thinks irradiated potatoes will radiate themselves. | {
"source": [
"https://chemistry.stackexchange.com/questions/138678",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/63274/"
]
} |
138,685 | I found a package containing some iodine I bought a while ago. It was contained in a plastic jar placed inside an aluminium-coated plastic pocket. The pocket had strange green worm-like trails over its surface. In the most advanced place, it actually seems to have removed the aluminium and left the plastic transparent with a green/yellow tinge. I wonder what reaction was occurring here? Aluminium and iodine should produce aluminium iodide, which is white. Why it appears green from the outside of the pocket is not clear. Perhaps it is the hydrate form, which is yellow, so together with the plastic colour it somehow appears green? Why this pattern advanced in strange worm-like trails is also not clear to me. As an aside, I assume this was the iodine I purchased; it is present as solid spheres which appear through the brown plastic bottle as shiny black/brown. When it was shipped I assumed it would be safe to leave as it is. It has never been exposed to high temperatures or sunlight. Apparently I need to research a better storage system than that provided to me, as the wax seal they appear to have tried to apply round the neck of the jar has not worked to contain the iodine. | {
"source": [
"https://chemistry.stackexchange.com/questions/138685",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/21773/"
]
} |
139,475 | According to what I've seen, fat has a listed caloric value of 9 Cal/g, while carbs and protein have listed caloric values of 4 Cal/g.* Are these numbers exact, or have they been rounded? And if they have been rounded, what are the exact numbers? One hit said that the numbers are rounded, but not by how much. *These are "food calories" (aka "large calories" or "Calories" with a capital "C"). 1 food calorie = 1000 "small calories" or "gram calories" = 1 kcal = 4.184 kJ. See: https://en.wikipedia.org/wiki/Calorie | They're not exact numbers. These numbers aren't exact for three reasons: Each type of carb, protein, and fat has a different caloric value. These are overall averages for each class. Even if you were dealing with a single pure compound, the value couldn't be exact because there is individual variation in how much of that compound is metabolized, based on digestion, absorption, etc. Even if #1 and #2 were not issues, the number still couldn't be exact, because it is a measured value, and measured values are never exact. Only counted numbers, and values that are exact by definition (e.g., the speed of light in SI units), are exact. To get a better idea of how these values are determined, and the issues and uncertainties associated with measuring them, I'd recommend reading through: Food and Agriculture Organization of the United Nations. FAO food and nutrition paper. Chapter 3: Calculation of the energy content of foods - energy conversion factors. Food and Agriculture Organization of the United Nations, 1979. Here is a direct link to chapter 3: http://www.fao.org/3/Y5022E/y5022e04.htm
"source": [
"https://chemistry.stackexchange.com/questions/139475",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/88137/"
]
} |
139,687 | There are lots of questions about reducing or burning CO2 to carbon and oxygen to solve climate change, but of course that wouldn't work because it takes a lot of energy. But carbon monoxide is more stable than the dioxide, so could CO2 be split into CO and oxygen to create more heat? Of course this is a bad idea since it'd produce a toxic gas, but is it at least theoretically possible? Apologies if I'm misusing chemistry terms or concepts; please inform me in the comments section. | Unfortunately, the question as stated is thermodynamically impossible. Let's look at the proposed reaction: $$\ce{CO2(g) -> CO(g) + O(g)}$$ This reaction is simply a bond dissociation (specifically, a carbon-oxygen covalent double bond is broken). We can look up the enthalpy change associated with it. From a table of values on Wikipedia, we find in the row for carbon dioxide that this reaction has an enthalpy change of $\mathrm{+532\ kJ\ mol^{-1}}$ at $\mathrm{298\ K}$. The proposed reaction is therefore heavily endothermic. That is to say, it must absorb energy. Interestingly, it's true that the extreme strength of the bond in carbon monoxide has a measurable effect, making this process more favourable than expected. However, it is still overall extremely unfavourable, and therefore requires a large input of energy. I stress that this is unavoidable, no matter how fancy your machine - if the end result is the reaction stated above, then you must pay the energy cost somehow. Part of the problem, though, is that we have monoatomic oxygen as a product, which is a very reactive, high-energy species - it doesn't actually exist except in special conditions. A simple adjustment therefore is to have molecular dioxygen, $\ce{O2}$ (the kind in the atmosphere that you breathe). The reaction then becomes: $$\ce{2 CO2(g) -> 2 CO(g) + O2(g)}$$ So what's the enthalpy change associated with this reaction? Looking up another table, this turns out to be $\mathrm{+283\ kJ\ mol^{-1}}$ at $\mathrm{298\ K}$. Again, this reaction is endothermic, though much less so than the first one. Regardless, once more this reaction is an energy sink. If you want more visceral confirmation of this fact, consider the following. It is well known that pure carbon monoxide burns in an oxygen atmosphere. The reaction is self-sustaining and releases considerable heat. If you pay close attention, the reaction in the video is the exact inverse of the second equation. By chemical thermodynamics, if the combustion of $\ce{CO}$ to $\ce{CO2}$ releases heat, then it is necessarily true that cleaving $\ce{CO2}$ to form $\ce{CO}$ and $\ce{O2}$ will consume energy. As a last point, there are ways to make the production of $\ce{CO}$ from $\ce{CO2}$ feasible, but it requires changing the products. For example, if hydrogen gas is used as a reagent, the following becomes possible: $$\ce{CO2(g) + H2(g) -> CO(g) + H2O(g)}$$ The enthalpy change for this reaction is $\mathrm{+41\ kJ\ mol^{-1}}$ at $\mathrm{298\ K}$, which is still endothermic, but approaching the break-even point. This is not too surprising, as hydrogen gas can behave as a reductant, and the bonds in water molecules are strong, pushing the reaction forwards. Let us make one last tiny modification: $$\ce{CO2(g) + H2(g) -> CO(g) + H2O\color{red}{(l)}}$$ By assuming the water produced is in the liquid state as opposed to a gas, the reaction surrenders a little more energy, and the calculated reaction enthalpy becomes $\mathrm{-3\ kJ\ mol^{-1}}$ at $\mathrm{298\ K}$.
This reaction is very mildly exothermic, which is to say it releases heat (admittedly, so little that it's within the margin of error, and slightly different conditions could make the reaction overall endothermic). If you're not dead-set on having carbon monoxide as a product, then there are further options still. For example, here is the complete reduction of $\ce{CO2}$ to methane ($\ce{CH4}$), a considerably exothermic process with a reaction enthalpy of $\mathrm{-253\ kJ\ mol^{-1}}$ at $\mathrm{298\ K}$: $$\ce{CO2(g) + 4H2(g) -> CH4(g) + 2 H2O(l)}$$ Methane is not an ideal product, as it too is a greenhouse gas and is a low-value chemical feedstock due to its abundance and relative lack of useful chemistry. There is much more interest in conversion of $\ce{CO2}$ to compounds such as methanol $\ce{CH3OH}$ and formic acid $\ce{HCOOH}$. These two particular reactions are also exothermic. There are several issues with using hydrogen reduction of $\ce{CO2}$ as a carbon capture strategy to combat climate change, but perhaps the main one is a real-world factor: most hydrogen we currently produce is derived from fossil fuels, notably from the reaction of fossil methane (natural gas) with steam at high temperatures, known as steam reforming. Therefore, as long as alternative sources of hydrogen gas using renewable, low-carbon-intensity energy are not available, this is a poor strategy to remove anthropogenic $\ce{CO2}$ from the atmosphere.
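These quoted enthalpies can be cross-checked with Hess's law; below is a minimal sketch, assuming common textbook standard enthalpies of formation at 298 K (the ΔHf values are supplied here, not taken from the sources above, so small rounding differences are expected).

```python
# dH_rxn = sum(n * dHf[products]) - sum(n * dHf[reactants])
# Standard enthalpies of formation at 298 K in kJ/mol (textbook values).
dHf = {"CO2(g)": -393.5, "CO(g)": -110.5, "O(g)": 249.2, "O2(g)": 0.0,
       "H2(g)": 0.0, "H2O(g)": -241.8, "H2O(l)": -285.8, "CH4(g)": -74.8}

def dH(reactants, products):
    """Reaction enthalpy in kJ/mol; each argument maps species -> coefficient."""
    total = lambda side: sum(n * dHf[s] for s, n in side.items())
    return total(products) - total(reactants)

print(dH({"CO2(g)": 1}, {"CO(g)": 1, "O(g)": 1}))                 # ~ +532
print(dH({"CO2(g)": 2}, {"CO(g)": 2, "O2(g)": 1}) / 2)            # ~ +283 per CO2
print(dH({"CO2(g)": 1, "H2(g)": 1}, {"CO(g)": 1, "H2O(g)": 1}))   # ~ +41
print(dH({"CO2(g)": 1, "H2(g)": 1}, {"CO(g)": 1, "H2O(l)": 1}))   # ~ -3
print(dH({"CO2(g)": 1, "H2(g)": 4}, {"CH4(g)": 1, "H2O(l)": 2}))  # ~ -253
```
| {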
"source": [
"https://chemistry.stackexchange.com/questions/139687",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/97949/"
]
} |
140,457 | According to my textbook, when a flame test of an iron salt is performed, it produces an orange, mostly yellow flame. Sodium salts also produce a yellow flame. As the colours of these two flames are too similar, how do I differentiate an iron flame from a sodium flame? | A sodium flame is yellow, but all the light is due to two lines in the yellow region. If you look at a sodium flame through a purple glass (cobalt glass), the yellow color is absorbed, and you do not see the sodium flame any more. The flame looks dark and colorless through a cobalt glass. On the contrary, an iron flame looks yellow, but it is made of a huge number of lines belonging to all regions of the spectrum. The sum of these colors looks yellow, but it's a visual effect. The flame contains all colors. If you look at such an iron flame through a cobalt glass, the flame is visible. It looks purple or violet, but it is visible. | {
"source": [
"https://chemistry.stackexchange.com/questions/140457",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/98987/"
]
} |
140,462 | I have only started to learn balancing redox reactions, and a recent question has me confused. The question is: Balance in both acidic medium and basic medium: $$\ce{Cr2O7^2- + C2H5OH -> Cr^3+ + CO2}$$ I usually look for symmetries between the reactant and product sides of a reaction to pair reactants with the corresponding products to form the half reactions, as per the first steps of balancing a redox reaction. I'm having a difficult time doing that here; what am I supposed to pair Cr with? I would personally pair it with the $\ce{Cr2O7^2-}$ molecule but that's just by process of elimination, because I would naturally pair $\ce{C2H5OH}$ with $\ce{CO2}$ due to the stronger resemblance they have with each other. Another matter of concern is the answer in the answer key; the answer in acidic solution is, apparently: $$\ce{2Cr2O7^2- + 16H+ + C2H5OH -> 4Cr^3+ + 2CO2 + 11H2O}$$ In a basic solution: $$\ce{2Cr2O7^2- + 5H2O + C2H5OH -> 4Cr^3+ + 2CO2 + 16OH-}$$ It seems like in the answer for the reaction in acidic medium, the left side of the reaction has a charge of +14 and the right side has a charge of +12. However, in the basic medium the answer has the left side with a total charge of -2 and the right side -4. How can these answers be correct if the charge isn't balanced? How do you balance a redox reaction lacking a molecular reactant on the product side? I am referring to the atom or molecule that changes oxidation state when I say molecular reactant. | {
"source": [
"https://chemistry.stackexchange.com/questions/140462",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/99055/"
]
} |
140,496 | I want to increase a fixed-size object's internal gas pressure by generating hydrogen in it, but I could not find the proper phase diagram for it. So I am wondering how high a pressure I can get. | $\ce{H2}$ cannot be liquefied at room temperature, whatever the pressure. Generally speaking, a gas can only be liquefied when the temperature is below its critical value.
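To make this concrete, here is a tiny illustrative check, assuming approximate literature critical temperatures (the numbers below are supplied for illustration, not taken from the answer above):

```python
# A gas can be liquefied by pressure alone only below its critical
# temperature T_c. Approximate critical temperatures in kelvin:
T_c = {"H2": 33.2, "N2": 126.2, "O2": 154.6, "CO2": 304.1}
T_room = 298.15  # K

for gas, tc in sorted(T_c.items(), key=lambda kv: kv[1]):
    verdict = "can" if T_room < tc else "cannot"
    print(f"{gas}: T_c = {tc} K -> {verdict} be liquefied at room temperature")
# H2 misses the bar by a factor of ~9; of these gases, only CO2 clears it.
```
| {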
"source": [
"https://chemistry.stackexchange.com/questions/140496",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/99084/"
]
} |
140,805 | There are these everyday things that one should know as a scientist and especially as a chemist, but which never come to light in an academic curriculum — at least not in mine. One such thing is the purpose of the imperative non-disruption of the cold chain. It is clear that there are plenty of reasons why it is crucial to transport food and also pharmaceuticals in a refrigerated condition. On the one hand, it significantly reduces the risk of bacteria proliferating, but it also prevents labile substances from decomposing or reacting with other ingredients. So far, so good. But why shouldn't you cool down products again once the cold chain has been interrupted? I have read, for example, in the package insert of a pharmaceutical product that the drug should always be stored below 25 °C. In pharmacies, the product is kept in the refrigerator, and I, as the consumer, should continue storage in the refrigerator if possible. If this is not feasible for me, e.g., due to lack of space, the storage temperature should not exceed 25 °C. It is also pointed out that I should not keep the drug in the refrigerator after storage at room temperature. This reminded me of the wise words of my grandmother, who said that fish, for example, should only be frozen once because otherwise, it will go bad. With many other foods, similar claims are made. Since these indications can be found across countries and cultures, I assume that there are known scientific reasons for this, but I am not aware of them. I can't figure out why it should be bad to refreeze a defrosted piece of meat or why I shouldn't put my medicine in the refrigerator on hot days anymore after storing it at room temperature for a while. Maybe a few food chemists and/or pharmacists can shed some light on this. Edit: Poutnik and Julian, I wish I could accept both of your answers. I find it difficult to decide which answer is "the best", as they both address a different aspect of my question, which they explain in a very understandable way. I have decided to mark Julian's answer as accepted since he — despite his long membership — has a much lower reputation than Poutnik. | Answer here from a quality manager in the pharmaceutical field in Europe. Pharmaceutical companies are obliged to perform stability tests for their products according to the relevant pharmaceutical (GMP) agency in your market (FDA/EMA/etc.) and the internationally agreed guidelines like ICH Q1A-F and those of the WHO. These tests are carried out in stability chambers with specific temperature and humidity according to your market. A product intended to be stored at room temperature in Europe and the US will need a study for a minimum of 12 months (up to 5 years) at 25 °C and 60% relative humidity (RH), as well as an intermediate study at 30 °C and 65% RH for 6 months to 5 years and an accelerated test at 40 °C and 75% RH for at least 6 months. Cold-stored products will have the long-term storage at 5 °C and the accelerated test at 25 °C and 60% RH, and there are a whole lot of other stability studies you can (or can be forced to) run.
Based on a risk assessment and your stability protocol, you will analyze your drug after defined times (like 1, 3, 6, 9, 12, 18, 24, … months). For each pharmaceutical product there must be a defined specification. This might include the assay, impurities like degradation products, metal residues, solvent residues, color, taste, microbiological contamination, etc. Now, after each time point, you test against your defined specification to check whether everything is still within it. The scope is again based (as is everything in the pharmaceutical environment nowadays) on a risk assessment. Using this information you can define the storage conditions and shelf life of your drug. If you find out you fulfill the specification at 25 °C for the whole duration but not at 30 °C, you will define the storage condition to be below 25 °C to be on the safe/legal side. Same with the storage at 5 °C. If the tests at 25 °C fail after 2 months, you can e.g. define a storage time of max 2 weeks at room temperature to be on the safe side.
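A hypothetical sketch of this workflow: the condition values follow the ICH Q1A figures quoted above, while the helper function and sample results are purely illustrative (this is not an actual GMP tool).

```python
# ICH-style stability conditions for an EU/US room-temperature product.
conditions = {
    "long_term":    {"temp_C": 25, "rh_pct": 60},
    "intermediate": {"temp_C": 30, "rh_pct": 65},
    "accelerated":  {"temp_C": 40, "rh_pct": 75},
}

def supported_shelf_life(results):
    """results maps a pull time (months) to whether ALL specification
    tests (assay, impurities, micro, ...) passed; returns the last
    time point before the first out-of-spec pull."""
    passed = 0
    for month in sorted(results):
        if not results[month]:
            break
        passed = month
    return passed

# Example: in spec through 12 months at 25 C / 60% RH, out of spec at 18.
pulls = {1: True, 3: True, 6: True, 9: True, 12: True, 18: False}
print(supported_shelf_life(pulls), "months at", conditions["long_term"])
```
| {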
"source": [
"https://chemistry.stackexchange.com/questions/140805",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/34122/"
]
} |
141,662 | Placed in equivalent freezers, would a liter of water or a liter of lava turn from liquid to solid first? (question from a 6 year old) Based on this page in a "Blaze" book, my six year old asked "which would win?" between water and lava. On further investigation, we refined the question to: which would turn solid first in similar conditions, a liter of room temperature water or a liter of volcanic lava? | You've got the right idea — you want to simplify the problem — but I don't think you're using quite the right simplification. Take a look at the book's set-up of the situation, and ask yourself how the water is stopping the lava. You'll see the idea is that we're using liquid water, not ice, to solidify the lava. So your question should really be: How much heat do we need to absorb from a liter of lava to turn it into a solid, and how much heat can a liter of water at room temperature absorb before it turns to steam? If the latter is larger than the former, then a liter of water can cool a liter of lava to the point where it solidifies before the water all changes into steam. [I'm using "heat" when I should really be using "thermal energy", but this is for a $6$ y.o., so I'm keeping it simple.] First, let's do the calculation for water. Here (since it's for a $6$ y.o.), I'm not going to show all the steps in the calculations: Energy to heat $\pu{1 L}$ liquid water at room temp ($25 \,\pu{^{\circ}C}$) to $100 \,\pu{^{\circ}C}$ = $\pu{75 kcal}$. Energy to turn $\pu{1 L}$ liquid water to steam at $100 \,\pu{^{\circ}C}$ = $\pu{533 kcal}$. Total = $\pu{608 kcal}$. According to https://en.wikipedia.org/wiki/Lava , lava is typically $700 \,\pu{^{\circ}C}$ to $1200 \,\pu{^{\circ}C}$, so let's call it $1000 \,\pu{^{\circ}C}$. And, using this source ( https://ocw.mit.edu/courses/earth-atmospheric-and-planetary-sciences/12-109-petrology-fall-2005/lecture-notes/Nov3notes.pdf ), let's assume it melts at $900 \,\pu{^{\circ}C}$ (there's actually a wide range of lava types, and thus a wide range of lava temps and melting points). Density of lava = $\pu{3.1 \frac{g}{cm^3}}$, so $\pu{1 L}$ lava weighs $\pu{3100 g}$. Energy released when $\pu{1 L}$ lava cools from $1000 \,\pu{^{\circ}C}$ to $900 \,\pu{^{\circ}C}$ = $\pu{93 kcal}$. Energy released when $\pu{1 L}$ lava solidifies at $900 \,\pu{^{\circ}C}$ = $\pu{310 kcal}$. Total = $\pu{403 kcal}$. So, based on the above, $\pu{1 L}$ of water is enough to solidify: $$\pu{1 L} \times \frac{\pu{608 kcal}}{\pu{403 kcal}} \approx \pu{1.5 L of lava}.$$ Finally, why does the water win? The main reason is that the water is going between a liquid and a gas, while the lava is going between a solid and a liquid. And, in general, it takes much more energy to change liquids into gases than solids into liquids, because in the former case you are pulling the molecules completely apart from each other. But I said it takes "much more energy" to change a liquid to a gas than a solid to a liquid, yet the difference here is only a factor of $1.5$. The discrepancy is because in this case you're not comparing masses of water and lava, you're comparing volumes (and for the same volume, the mass of lava is $3.1\times$ greater). Thus, if you do it on a mass basis, $\pu{1 kg}$ of water is enough to solidify about $\pu{4.7 kg}$ of lava (because, while $\pu{1 L}$ of water weighs $\pu{1 kg}$, $\pu{1.5 L}$ of lava weighs about $\pu{4.7 kg}$).
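The bookkeeping condenses to a few lines; this sketch reuses the figures quoted above (the per-kilogram lava heat capacity and heat of fusion are back-calculated from the 93 kcal and 310 kcal totals, so they are derived assumptions, not measured inputs):

```python
# Water: 1 L = 1 kg, heated 25 -> 100 C, then vaporized.
q_water = 1.0 * 1.0 * (100 - 25) + 533.0   # kcal: 75 + 533 = 608

# Lava: 1 L = 3.1 kg, cooled 1000 -> 900 C, then solidified.
m_lava = 3.1                               # kg per litre
q_lava = 0.30 * m_lava * (1000 - 900) + 100.0 * m_lava  # kcal: 93 + 310 = 403

print(q_water / q_lava)             # ~1.5 litres of lava per litre of water
print(q_water / (q_lava / m_lava))  # ~4.7 kg of lava per kg of water
```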
[For these quantities, mass is typically a more fundamental basis of comparison than volume, which is why heat capacities, heats of fusion, and heats of vaporization are usually quoted on a mass basis. When you see the term "specific", as in "specific heat capacity", that means "per unit mass". Of course, one can also use a number basis (moles).] | {
"source": [
"https://chemistry.stackexchange.com/questions/141662",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/34027/"
]
} |
141,689 | Disclaimer: I am not a chemist by any means, and I only have knowledge limited to what I learned in my university's Chemistry III course. Basic understanding of everything up to valence electron orbitals. Why is there no set of rules to follow which can predict the product of chemical reactions? To me, it seems that every other STEM field has models to predict results (physics, thermodynamics, fluid mechanics, probability, etc.) but chemistry is the outlier. Refer to this previous question: How can I predict if a reaction will occur between any two (or more) substances? The answers given state that empirical tests are the best way we've gotten to predict reactions, because we can discern patterns or "families" of reactions to predict outcomes. Are we only limited to guessing at "family" reactions? In other words, why am I limited to knowing my reactants and products, then figuring out the process? Can I know the reactants, hypothesize the process, and predict the product? If the answer is "It's complicated", I would enjoy a push in the right direction - like if valence orbitals actually do help us predict, or any laws of energy conservation, etc., please give me something which I can go research. | First of all, I'd ask: what do you admit as "chemistry"? You mentioned thermodynamics as being a field where you have "models to predict results". But thermodynamics is extremely important in chemistry; it wouldn't be right if we classified it as being solely physics. There is a large amount of chemistry that can be predicted very well from first principles, especially using quantum mechanics. As of the time of writing, I work in spectroscopy, which is a field that is pretty well described by QM. Although there is a certain degree of overlap with physics, we again can't dismiss these as not being chemistry. But, I guess, you are probably asking about chemical reactivity. There are several different answers to this depending on what angle you want to approach it from. All of these rely on the fact that the fundamental theory that underlies the behaviour of atoms and molecules is quantum mechanics, i.e. the Schrödinger equation.* Addendum: please also look at the other answers, as each of them brings up different excellent points and perspectives. (1) It's too difficult to do QM predictions on a large scale Now, the Schrödinger equation cannot be solved on real-life scales.† Recall that Avogadro's number, which relates molecular scales to real-life scales, is ~$10^{23}$. If you have a beaker full of molecules, it's quite literally impossible to quantum mechanically simulate all of them, as well as all the possible things that they could do. "Large"-ish systems (still nowhere near real-life scales, mind you — let's say ~$10^3$ to $10^5$) can be simulated using approximate laws, such as classical mechanics. But then you lose out on the quantum mechanical behaviour. So, fundamentally, it is not possible to predict chemistry from first principles simply because of the scale that would be needed. (2) Small-scale QM predictions are not accurate enough to be trusted on their own That is not entirely true: we are getting better and better at simulating things, and so often there's a reasonable chance that if you simulate a tiny bunch of molecules, their behaviour accurately matches real-life molecules. However, we are not at the stage where people would take this for granted. Therefore, the ultimate test of whether a prediction is correct or wrong is to do the experiment in the lab.
If the computation matches experiment, great: if not, then the computation is wrong. (Obviously, in this hypothetical and idealised discussion, we exclude unimportant considerations such as "the experimentalist messed up the reaction"). In a way, that means that you "can't predict chemistry": even if you could, it "doesn't count", because you'd have to then verify it by doing it in the lab. (3) Whatever predictions we can make are too specific There's another problem that is a bit more philosophical, but perhaps the most important. Let's say that we design a superquantum computer which allowed you to QM-simulate a gigantic bunch of molecules to predict how they would react. This simulation would give you an equally gigantic bunch of numbers: positions, velocities, orbital energies, etc. How would you distil all of this into a "principle" that is intuitive to a human reader, but at the same time doesn't compromise on any of the theoretical purity? In fact, this is already pretty tough or even impossible for the things that we can simulate. There are plenty of papers out there that do QM calculations on very specific reactions, and they can tell you that so-and-so reacts with so-and-so because of this transition state and that orbital. But these are highly specialised analyses: they don't necessarily work for any of the billions of different molecules that may exist. Now, the best you can do is to find a bunch of trends that work for a bunch of related molecules. For example, you could study a bunch of ketones and a bunch of Grignards, and you might realise a pattern in that they are pretty likely to form alcohols. You could even come up with an explanation in terms of the frontier orbitals: the C=O π* and the Grignard C–Mg σ. But what we gain in simplicity, we lose in generality. That means that your heuristic cannot cover all of chemistry. What are we left with? A bunch of assorted rules for different use cases. And that's exactly what chemistry is. It just so happens that many of these things were discovered empirically before we could simulate them. As we find new theoretical tools, and as we expand our use of the tools we have, we continually find better and more solid explanations for these empirical observations. Conclusion Let me be clear: it is not true that chemistry is solely based on empirical data. There are plenty of well-founded theories (usually rooted in QM) that are capable of explaining a wide range of chemical reactivity: the Woodward–Hoffmann rules, for example. In fact, pretty much everything that you would learn in a chemistry degree can already be explained by some sort of theory, and indeed you would be taught these in a degree. But, there is no (human-understandable) master principle in the same way that Newton's laws exist for classical mechanics, or Maxwell's equations for electromagnetism. The master principle is the Schrödinger equation, and in theory, all chemical reactivity stems from it. But due to the various issues discussed above, it cannot be used in any realistic sense to "predict" all of chemistry. * Technically, this should be its relativistic cousins, such as the Dirac equation. But, let's keep it simple for now. † In theory it cannot be solved for anything harder than a hydrogen atom, but in the last few decades or so we have made a lot of progress in finding approximate solutions to it, and that is what "solving" it refers to in this text. | {
"source": [
"https://chemistry.stackexchange.com/questions/141689",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/100134/"
]
} |
142,839 | Is there a chemical or historical significance in the fact that 'ethane' is just 'methane' without the 'm'? | meth vs eth [OP] Why is '-ethane' in 'methane'? This is a coincidence. Methyl is ultimately from Greek methy "wine" + hylē "wood". Source: https://www.etymonline.com . The terminology was created by Dumas and Péligot in 1834 to distinguish wood alcohol from ethanol, and was published first in the French language: Source: https://gallica.bnf.fr/ark:/12148/bpt6k6569005x/f15.item Ethyl is from Greek aithēr "upper air; bright, purer air; the sky" (opposed to aēr "the lower air"), from aithein "to burn, shine," from PIE *aidh- "to burn". Source: https://www.etymonline.com . Liebig had argued in 1834 that what now is known to be $\ce{-C2H5}$ is a "radical", i.e. a recurring group in molecules such as diethyl ether, ethanol and ethyl ethanoate, and gave it the name ethyl. Source: https://babel.hathitrust.org/cgi/pt?id=uva.x002457887&view=1up&seq=30 Using the ending -yl for both the methyl and the ethyl group is a suggestion by Berzelius from 1835 (the correct atomic weight of carbon was not known yet, so the chemical formulas are off by a factor of two): Source: https://books.google.com/books?id=1DM1AAAAcAAJ&pg=PA376#v=onepage&q&f=false , see the Wikipedia entry on the ethyl group. methane vs ethane That these both end in -ane is not a coincidence - all alkanes end in -ane. As stated in AChem's answer, this nomenclature was introduced by A. W. Hofmann in 1865. Acknowledgements Maurice's answer inspired me to find the first work by Dumas that led to the methyl terminology. I have edited my answer to incorporate information from the excellent answers by AChem and Jan and the insightful comments. In trying to figure out the sequence of events in the 1830s, I made use of this account by Frederick E. Ziegler. References Dumas, J.-B., Mémoire sur l'esprit de bois et sur les divers composés éthérés qui en proviennent, lu à l'Académie des Sciences les 27 octobre et 3 novembre 1834, Paris, 1834. Jean-Baptiste Dumas et Eugène Péligot, Mémoire sur l'Esprit de Bois et sur les divers Composés Ethérés qui en proviennent, Annales de chimie et de physique, 58 (1835), p. 5-74. Justus Liebig (1834), "Ueber die Constitution des Aethers und seiner Verbindungen" (On the composition of ethers and their compounds), Annalen der Pharmacie, 9: 1–39. Jacob Berzelius, Årsberättelsen om framsteg i fysik och kemi [Annual report on progress in physics and chemistry] (Stockholm, Sweden: P.A. Norstedt & Söner, 1835), p. 376. A. W. Hofmann (1866), in Monatsbericht der Königl. Preuss. Akad. der Wissenschaften zu Berlin, 1865, 653. | {
"source": [
"https://chemistry.stackexchange.com/questions/142839",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/95447/"
]
} |
142,850 | When we perform an organic reaction without any additional catalyst, except that the reaction is carried out under microwave conditions, we can accelerate the reaction in comparison with one under conventional heating only. So, can microwaves be considered a form of catalysis according to the concepts of reaction kinetics? | {
"source": [
"https://chemistry.stackexchange.com/questions/142850",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/62900/"
]
} |
143,984 | Presumably, it would be expensive to use ozone ($\ce{O3}$) as an oxidizer instead of $\ce{O2}$, but would the extra oomph be worth it? Does $\ce{O2}$ provide as much thrust/energy/heat as can be provided, given the liquid hydrogen propellant? I figured Chemistry was better than Physics, Engineering or Astronomy for this question. | As is usual with rocket fuels, the problems of ozone are practicality, not performance. Almost every answer for why a specific rocket fuel component is used or not will end up referring to John D. Clark's magnificent and sparklingly written book Ignition: An Informal History of Liquid Rocket Propellants (a rare technical book worth reading for the brilliant and humorous style in addition to the technical content). His summary of why ozone, despite its apparent functional advantages, is not more widely used is fairly simple: the practical problems outweigh the apparent advantages. He points out the advantages: What makes it attractive as a propellant is that (1) its liquid density is considerably higher than that of liquid oxygen, and (2) when a mole of it decomposes to oxygen during combustion it gives off 34 kilocalories of energy, which will boost your performance correspondingly. Sänger was interested in it in the 30's, and the interest has endured to the present. In the face of considerable disillusionment. But every available way of creating a liquid with a high proportion of ozone is dangerous. Ozone is extremely toxic and unstable: For it has its drawbacks. The least of these is that it's at least as toxic as fluorine. ... Much more important is the fact that it's unstable—murderously so. At the slightest provocation and sometimes for no apparent reason, it may revert explosively to oxygen. And this reversion is catalyzed by water, chlorine, metal oxides, alkalis—and by, apparently, certain substances which have not been identified. Compared to ozone, hydrogen peroxide has the sensitivity of a heavyweight wrestler. Work was done on solutions of ozone in liquid oxygen, which are somehow more stable, but this has the disadvantage that ozone/oxygen mixtures separate into two phases, the ozone-rich one of which is difficult to keep out of the feed tubes after firing and is extremely unstable. Another mixture considered to make handling easier was with liquid fluorine (!!!). Ultimately he concludes, regarding the known work on ozone mixtures of any sort: For ozone still explodes. Some investigators believe that the explosions are initiated by traces of organic peroxides in the stuff, which come from traces, say, of oil in the oxygen it was made of. Other workers are convinced that it's just the nature of ozone to explode, and still others are sure that original sin has something to do with it. So although ozone research has been continuing in a desultory fashion, there are very few true believers left, who are still convinced that ozone will somehow, someday, come into its own. I'm not one of them. Maybe there are theoretical advantages, but they are outweighed by the practical and safety problems. In a profession used to testing things like FOOF (dioxygen difluoride) and chlorine trifluoride, this is some admission to make. | {
"source": [
"https://chemistry.stackexchange.com/questions/143984",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/64075/"
]
} |
145,029 | When naming compounds, sometimes when there are two vowels in a row the second is elided: this happens for example with "mono-oxide", which becomes "monoxide" instead. Why is this not always applied, e.g. with "diiodine"? Why aren't the repeated i's removed to make it "diodine"? | Both "monooxide" and "monoxide" are used in the literature, yet "monoxide" is being used more often (Google Books Ngram Viewer).
Although this is an accepted elision, it is not the preferred one, and must not set a precedent for other cases where a multiplicative prefix ends with the same vowel as the root word begins with, such as "diiodine". According to the current version of Nomenclature of Inorganic Chemistry, IUPAC Recommendations 2005 [1, p. 31]: IR-2.7 ELISIONS In general, in compositional and additive nomenclature no elisions are made when using
multiplicative prefixes. Examples: tetraaqua (not tetraqua), monooxygen (not monoxygen), tetraarsenic hexaoxide. However, monoxide, rather than monooxide, is an allowed exception through general use. Further, from section IR-5.2 Stoichiometric names of elements and binary compounds [1, p. 69]: The multiplicative prefixes precede the names they multiply, and are joined directly to them without spaces or hyphens. The final vowels of multiplicative prefixes should not be elided (although ‘monoxide’, rather than ‘monooxide’, is an allowed exception because of general usage). […] Examples: […] $\ce{NO}$ nitrogen oxide, or nitrogen monooxide, or nitrogen monoxide References IUPAC. Nomenclature of Inorganic Chemistry, IUPAC Recommendations 2005 (the “Red Book”), 1st ed.; Connelly, N. G., Damhus, T., Hartshorn, R. M., Hutton, A. T., Eds.; RSC Publishing: Cambridge, UK, 2005. ISBN 978-0-85404-438-2. IUPAC website
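As a toy illustration of the rule (a sketch of my own, not part of the IUPAC text): the prefix keeps its final vowel in every case, with "monoxide" hard-coded as the single general-usage exception.

```python
def join_prefix(prefix, root):
    """Join a multiplicative prefix to a root without elision,
    except for the allowed general-usage exception 'monoxide'."""
    name = prefix + root
    return "monoxide" if name == "monooxide" else name

print(join_prefix("mono",  "oxide"))   # monoxide   (allowed exception)
print(join_prefix("di",    "iodine"))  # diiodine   (no elision)
print(join_prefix("tetra", "aqua"))    # tetraaqua  (no elision)
print(join_prefix("hexa",  "oxide"))   # hexaoxide  (no elision)
```
| {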
"source": [
"https://chemistry.stackexchange.com/questions/145029",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/103168/"
]
} |
147,710 | Fluorine is more reactive than chlorine. But does that mean that fluorine can cause more damage to living tissues? If so, why wasn't fluorine used in WW1 instead of chlorine? | Fluorine is much more reactive than chlorine and would certainly cause more damage to living tissues. You can even check out https://www.youtube.com/watch?v=vtWp45Eewtw for some fun demonstrations of its oxidizing power too! Likewise, compared to chlorine gas, I'd assume fluorine munitions would be significantly harder to manufacture, store, and handle for any side attempting to wield them in combat. (And, as Waylander mentioned in the comments, especially with "WW1-era technology" too!) Also, due to its great reactivity, fluorine gas will readily react even with atmospheric humidity to form hydrogen fluoride: $$\ce{2F2(g) + 2H2O(g) -> 4HF(g) + O2(g)}$$ which, while still toxic, is much lighter than chlorine gas and so may not be as good at staying "close to the ground" and/or "flowing down" into an enemy's trenches as intended. Or even worse: potentially dispersing back to your own troops and damaging them as well!
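The "much lighter than chlorine" point is easy to quantify from molar masses; a quick sketch, assuming near-ideal gas behaviour (which real HF vapour only roughly obeys, since it associates) and approximate molar masses supplied here:

```python
# At equal T and p, gas density scales with molar mass, so compare
# each gas with air (~29 g/mol): heavier-than-air gases tend to sink.
molar_mass = {"Cl2": 70.9, "F2": 38.0, "HF": 20.0, "air": 29.0}

for gas in ("Cl2", "F2", "HF"):
    ratio = molar_mass[gas] / molar_mass["air"]
    tendency = "sinks into trenches" if ratio > 1 else "rises/disperses"
    print(f"{gas}: {ratio:.1f}x air -> {tendency}")
# Cl2 is ~2.4x air; HF is only ~0.7x air, hence the dispersal problem above.
```
| {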
"source": [
"https://chemistry.stackexchange.com/questions/147710",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/105790/"
]
} |
151,267 | I am an organic chemistry student learning how atomic orbitals interfere to give rise to molecular orbitals. The image below suggests that each of the hydrogens' atomic orbitals interfere both constructively AND destructively. How is this possible? More fundamentally, how does the 1s atomic orbital of a hydrogen atom have both positive and negative phase in the first place? I realize that there is a similar question on this topic but I wasn't able to understand the answer to that post due to my limited background in quantum mechanics and hence was looking for someone who could explain it in simple terms. | What does MO formation entail? A very common misconception is that the formation of MOs involves addition, or subtraction, of two physical objects. And the confusion arises because, how can the same physical object have two different phases when added or subtracted? However, this is not an accurate picture. MO construction is not a physical process which occurs between two real objects which each possess a definite phase. There are at least two reasons for this: Orbitals are not objects. The electrons that go into orbitals are objects, but the orbitals themselves are wavefunctions. Wavefunctions do not have definite phases. When we talk about the phase of an orbital, what it refers to is a constant $k$ which the entire wavefunction is multiplied by. It turns out that a fundamental postulate of quantum mechanics is that the wavefunction $\psi(x)$ and the wavefunction $k\cdot\psi(x)$ represent exactly the same state, so the phase $k$ is actually completely meaningless. The 1s orbital does not possess a particular phase; it can possess any phase and still be a 1s orbital. So, what does it really entail? I think a better description is that orbitals are possibilities for where the electrons can go. Let's say you have a hydrogen atom with one electron. This electron can either be in a 1s orbital, or (if you excite the electron) it can be in a 2s orbital, or a 2p orbital, etc. etc. So, the electron has many different options as to where it can go. Of course, in ground-state hydrogen, the eventual choice is the 1s orbital. Alternatively, let's say you have two hydrogen nuclei, but only one electron. (So there is a net positive charge.) This electron could be in the 1s orbital of hydrogen atom A, or the 1s orbital of hydrogen atom B, or the 2s orbital of hydrogen atom A, etc. etc. These different atomic orbitals represent the possibilities which the electron can take. In this regard, the orbitals are not objects or boxes to be inhabited by the electron, but they represent a set of choices. If we adopt this mindset, then it already becomes clearer how both bonding and antibonding orbitals form: these are simply the two "possibilities" which can happen. It is not a question of how can they both be true at the same time, because that isn't the point: the point is that MOs themselves are different possibilities for an electron to adopt, and it isn't necessary that both of them be true at the same time. Indeed, they can't be, because an electron can't simultaneously inhabit two MOs. Cars I hesitated to include this analogy in the original answer, but will include it now, since it may prove helpful to some. There are some problems with it, however, so don't take it as a literal description, but rather a conceptual one. Consider, then, that you're standing on a hill overlooking a two-lane motorway which runs north to south.
However, you don't have a compass on hand, so you don't actually know which way is north or south. (This is analogous to not knowing the actual phase of an orbital, $k$: you can't tell whether it's $+1$ or $-1$, for example.) Suppose you see two cars driving along the motorway, one on each lane. If you focus on one car at a time, what information can you give about each of the two cars? You can't tell which way they're headed, because you don't know north from south. However, you can tell which lane they are driving on: for example, because one lane is closer to you than the other. So you can furnish two pieces of information: There is a car on the nearer lane. It is going either north or south, but we don't know which. There is a car on the further lane. It is also going either north or south, but we don't know which. This situation is exactly analogous to having two hydrogen atoms, with one electron in each 1s orbital. In other words, this corresponds to the "pre-MO" description of the $\ce{H2}$ system: There is a 1s orbital on hydrogen atom A. It has some phase, but we don't know which. There is a 1s orbital on hydrogen atom B. It has some phase, but we don't know which. Now, let's return to the cars. You might think that there actually is another way we can describe their movement, and that is based on not just zooming onto one of them at a time, but rather by comparing their relative motion. So, again we have two possibilities: There are two cars on either lane, and they are moving in the same direction. (North/north or south/south, but we don't know.) There are two cars on either lane, and they are moving in opposite directions. (North/south or south/north, but we don't know.) This is precisely analogous to the MO description of $\ce{H2}$: There is a bonding orbital formed from two 1s orbitals on either atom, and they have the same phase, although we don't know exactly what that phase is. There is an antibonding orbital formed from two 1s orbitals on either atom, and they have opposite phases, although we don't know exactly what phases those are. Note that we didn't somehow physically stick the two cars together, or force them to travel in any particular direction. What we have added together are not cars, but rather possibilities: we combined the two "base" descriptions, of one car moving along each lane, into two "possibilities" where they are either going in the same or opposite direction. At any given point in time, whenever you see two cars going along the motorway, either of these possibilities can be realised. The same is true of MOs. The MOs don't represent physical addition of two orbitals, but rather different possibilities of how these orbitals can combine: they can either combine constructively, or destructively. For any given electron, either of these possibilities can be realised, too: and unlike the motorway, it's not mutually exclusive, because one (or two) electron(s) can be in the bonding orbital and another can be in the antibonding orbital. Linear algebra This is a more formal description of how MOs are constructed, and much closer to the truth. There are very few, and possibly no, quick and dirty ways to explain MO theory, in my opinion. However, if you have studied linear algebra before,* then the transition from AOs to MOs is the same as a change of basis. A basis refers to a minimal set of vectors, from which any arbitrary vector may be constructed through linear combination.
If you have a vector $(a, b)$ , you can express this as: $$\begin{pmatrix}a\\b\end{pmatrix} = a\begin{pmatrix}1\\0\end{pmatrix} + b\begin{pmatrix}0\\1\end{pmatrix}.$$ It doesn't matter what $a$ and $b$ are, you can always express $(a, b)$ as a linear combination of the two basis vectors. You can think of the basis vectors $(1, 0)$ and $(0,1)$ as being the 1s orbitals on both individual atoms. The MOs correspond to using the basis $(0.5, 0.5)$ and $(0.5, -0.5)$ .† Notice that this is still a valid basis, because we can still express any arbitrary vector $(a, b)$ as a linear combination of these: $$\begin{pmatrix}a\\b\end{pmatrix} = (a+b)\begin{pmatrix}0.5\\0.5\end{pmatrix} + (a-b)\begin{pmatrix}0.5\\-0.5\end{pmatrix}.$$ If we had rejected the "antibonding" combination $(0.5, -0.5)$ , and only taken the "bonding" combination $(0.5, 0.5)$ , then we would not be able to form a valid basis. For example, we could not express the vector $(1, 2)$ as just a linear combination of $(0.5, 0.5)$ . From this point of view, AOs and MOs are simply different bases.‡ The existence of both the bonding and antibonding MO is not only logical, but also mandatory in order to form a complete basis. The familiar AOs that you have studied (1s, 2s, 2p, ...) are simply a series of basis states for the hydrogen atom. In the $\ce{H2}$ molecule, there are two sets of basis states, one for each atom. The transition from "2 sets of AOs" to "1 set of MOs" is just a change of basis: we go from two sets of atomic bases to one single molecular basis. Epilogue The proper answer is, of course, to pick up a QM textbook and study it. By QM I do indeed mean quantum mechanics , not just MO theory. It is a long journey, and you may not necessarily be fully prepared for it right now, but it will reward you with a better understanding of how this works. MO theory fundamentally relies on quantum mechanics , and to learn MO theory without understanding quantum mechanics is what leads to confusion and questions like yours. Footnotes * The analogy I use above is slightly simplified, and might look slightly contrived, but the link between QM and linear algebra is actually very deep. QM is somewhat analogous to linear algebra but on an infinite-dimensional vector space (technically, a Hilbert space ). † Technically this should be $1/\sqrt{2}$ , not $0.5$ . In QM we are nearly always concerned with orthonormal bases, i.e. ones where each basis vector has a length of $1$ and forms a scalar product of $0$ with every other basis vector. (Mathematically, we have a basis $\{|i\rangle\}$ for which $\langle i | j \rangle = \delta_{ij}$ .) ‡ "Bases" being the plural of "basis". | {
"source": [
"https://chemistry.stackexchange.com/questions/151267",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/109653/"
]
} |
153,037 | While researching chromate conversion coating for edits to this answer in Space Exploration SE, I came upon the following passage in Corrosion Resistance of Stainless Steel to Sulfuric Acid Sulfuric acid is quite corrosive in water although it makes a poor electrolyte due to the fact that very little of it will dissociate into ions, according to Chemical Land 21’s description of sulfuric acid. The concentration of the acid is what determines its corrosive effectiveness, as British Stainless Steel Association (BSSA) explains. Most types of stainless steel can resist low or high concentrations, but it will attack the metal at intermediate temperatures. The concentration is affected by temperature. Wikipedia's Sulfuric_acid; Polarity and conductivity says: In spite of the viscosity of the acid, the effective conductivities of the $\ce{H3SO4+}$ and $\ce{HSO4-}$ ions are high due to an intramolecular proton-switch mechanism (analogous to the Grotthuss mechanism in water), making sulfuric acid a good conductor of electricity. It is also an excellent solvent for many reactions. Question: So one source explains that "Sulfuric acid is... a poor electrolyte due to the fact that very little of it will dissociate into ions", and the other seems to suggest that the "effective electrical conductivities" of the resulting ions are high, "making sulfuric acid a good conductor of electricity". Is the fact that these appear to disagree to me due to my failure to understand what each means, or is one incorrect or at least incomplete? | The best known conducting aqueous solutions are those of strong acids in water, because the hydronium ion (= protonated water) has the highest electrical conductivity known today. The infinite-dilution conductivities of the hydronium ion and a few other ions are compared below, from the book chapter "Proton Transfer Reactions and Kinetics in Water" by Stillinger. The units are $\mathrm{cm^2\ ohm^{-1}\ equiv^{-1}}$. As you can see, the best conducting ions in water are the hydronium ion and the hydroxide ion. The "rest" of the ions are nowhere close. $$
\begin{array}{rllll}
\hline t\left({ }^{\circ} \mathrm{C}\right) & \lambda_{\mathrm{H}^{+}}^{\circ} & \lambda_{\mathrm{OH}^{-}}^{\circ} & \lambda_{\mathrm{Na}^{+}}^{\circ} & \lambda_{\mathrm{Cl}^{-}}^{\circ} \\
\hline 0 & 225 & 105 & 26.5 & 41.0 \\
5 & 250.1 & & 30.3 & 47.5 \\
15 & 300.6 & 165.9 & 39.7 & 61.4 \\
18 & 315 & 175.8 & 42.8 & 66.0 \\
25 & 349.8 & 199.1 & 50.10 & 76.35 \\
35 & 397.0 & 233.0 & 61.5 & 92.2 \\
45 & 441.4 & 267.2 & 73.7 & 108.9 \\
55 & 483.1 & 301.4 & 86.8 & 126.4 \\
100 & 630 & 450 & 145 & 212 \\
\hline
\end{array}
$$ So, when people say good or poor aqueous electrolytes, these are all relative terms. You can use the values above as your measuring bar. Dilute sulfuric acid in water is therefore a very good conductor with respect to other salt solutions, because it furnishes "protons" in water. When it comes to the conductivity of pure sulfuric acid, comparing it with its aqueous solutions is an apples-to-oranges comparison. There is no water (or very little) and only a small amount of hydronium ions; the current-carrying ions are now different. If you look at the conductance (the reciprocal of resistance in ohms; the old paper labels the y-axis as conductivity) as a function of sulfuric acid concentration, it follows very non-linear behavior, and the resistance is relatively high compared to aqueous solutions. Thus the Wikipedia claim that "sulfuric acid [is] a good conductor of electricity" is a very relative comparison. A good conductor of electricity as compared to what? The writer is silent on that! From this graph one can easily deduce that concentrated sulfuric acid is not a very good conductor when compared with its aqueous solutions. (Ref: Darling, Horace E. "Conductivity of sulfuric acid solutions." Journal of Chemical & Engineering Data 9.3 (1964): 421-426.) | {
"source": [
"https://chemistry.stackexchange.com/questions/153037",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/16035/"
]
} |
153,700 | I recently saw "hydrophobic water" at my school science fair. I have no idea of the procedure for making it, so, can I make it? I did google it and read some articles (zero helpful) and saw some images (which seem to match the one I saw). So, could you tell me how to make hydrophobic water, if I even can? Thanks in advance. Articles that didn't help: How to make stuff super hydrophobic How to make stuff hydrophobic or hydrophilic | You can't make hydrophobic water molecules. You can, however, make hydrophobic droplets containing mostly water. They are not made of pure water, but are coated with a substance that remains on the surface of the water droplet and changes the properties of the surface. Here is an example of making such a hydrophobic drop: https://www.youtube.com/watch?v=H0spGzO2FSo . And here is a demonstration of the properties (this drop also contains a hydrophilic dye, which remains mixed with the water in the bulk of the droplet): https://www.youtube.com/watch?v=MkLbVLGcn-A . Here is another demonstration with a drop of pure water (top left) encountering a coated drop (bottom right, pushed with a finger): Source: https://www.youtube.com/watch?v=Z4bVP7hEcKI | {
"source": [
"https://chemistry.stackexchange.com/questions/153700",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/112066/"
]
} |
155,510 | I'm a high school student. I noticed that the $\ce{H+}$ ion is commonly present in my books, while I didn't find any mention of $\ce{H-}$ ions. However, I found on the internet that $\ce{H-}$ also exists, but it is less common. Because hydrogen has just one electron, it can either gain one electron to become $\ce{H-}$ or lose an electron to become $\ce{H+}$. So, both should have the same chance of existing. Then, why is $\ce{H+}$ more common than $\ce{H-}$? The answer to the question might be obvious to most of the users here with their knowledge. But please share a detailed explanation that is suitable for a high school student. | This is because we live in a world dominated by oxygen and water. In other words, it is an oxidized world. Most metals occur naturally in the form of oxides, silicates, halides, or other derivatives. Hydrogen occurs as $\ce{H+}$. In a hypothetical world dominated by metals, all that could have turned out otherwise. Oxygen would be a scarcity, and would come in the form of metal oxides. Nitrogen would be found in nitrides, hydrogen in hydrides (so, a lot of $\ce{H-}$), and so on. There would be no free water or free oxygen. In our world, it is the other way around. Water is ubiquitous (that is, found pretty much everywhere); oxygen is even more so. $\ce{H-}$ can't exist in their presence. It will quickly react with either and cease to be $\ce{H-}$. It can only exist in an artificial environment. So it goes. | {
"source": [
"https://chemistry.stackexchange.com/questions/155510",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/112624/"
]
} |
159,942 | I recently took a Gen. Chem. 2 exam that contained this question. I answered false, but my professor said the answer is true. My reasoning was that any electrons that leave the anode end up at the cathode, so the number of electrons should be conserved. This was consistent with the way we studied redox reactions and electric cells: reactions were always broken down into half-reactions in which the electrons exchanged appeared on opposite sides of the half-reaction equations and canceled out. We never discussed whether electrons are lost in electrical circuits - a strict conservation of electrons was always implied. I asked a physics professor who teaches a class on electromagnetism for his thoughts, and he sent me the following reply: The number of electrons is conserved if there are no losses or leakage. Probably what was meant is whether these are free electrons or not. Clearly, in a used battery you have less free electrons, since there is no more energy to strip them from whatever compound is used in the battery." I sent this (as well as a list of other sources that I won't quote here) to my chemistry professor and received the following reply: The question states electrons. Period. There are no "free" electrons in a battery (there can be delocalized electrons, but that's not the question). Batteries are made of atoms. Atoms are made of protons, neutrons and electrons. As a battery is used, through the flow of electrons, electrons are lost to the environment (fyi - there is energy/electron loss, albeit small to "run" the voltmeter and even in the flow of electrons through a conducting wire). Those electrons are no longer in the battery. Thus, the battery has the same number of protons and neutrons, but less electrons. This also means more unreactive metal cations exist in a used battery. I appreciate all your research to make a point, but hopefully you now see the answer is true. Even your physics professor agrees because there is loss/leakage. Thus, less electrons in the battery. Story: I have a family friend, who is a full professor of electrical engineering at Caltech. She is clearly on the cutting-edge of this field. In one of our discussions, she shared displeasure in online information. She told me her grad students often cited sources that were not true. There is more to this story, but I think the point has been made. Keep it simple. Electrons are energy. They flow. That energy goes elsewhere, leaving the initial system with less energy/electrons. I was a bit baffled by the "electrons are energy" remark - this seems at best poetic. He seems to be conflating electrical potential energy with electrons, but these aren't the same thing. If electrons actually "were" the electrical potential energy in a battery, wouldn't that imply that the compound at the cathode would never actually be reduced, since all energy, and therefore all electrons that "flowed" through the circuit, would be lost after depletion of the battery's potential energy? My understanding is that electrons can have energy but are not themselves energy. Although I have a very low level of knowledge regarding this topic, I've done a few hours of research and found that the common notion of electricity as a flow of electrons akin to a river is wrong, and that although electrons do move very slowly through a circuit, the flow of energy is due to electromagnetic fields associated with charged particles. 
Unfortunately, I could not find any sources that directly answered this question, so I would greatly appreciate direct answers from experts on this topic. | Very bad explanation in the email response. The explanation reads: "Thus, the battery has the same number of protons and neutrons, but
less electrons. This also means more unreactive metal cations exist in
a used battery." No, not at all. An electrochemical cell is not ionized. It is always neutral overall. However, the electrodes are indeed electrostatically charged (just like your charged comb). If you had a charge sniffer (e.g., a charge sensor from Vernier) and if you touch the positive end of the battery with such a sensor, it will indeed show that this end was electrostatically charged with a positive sign. The negative pole of the battery is equally charged but with a negative sign. Overall an isolated battery as a system is electrically neutral. Why don't you feel the charge like a charged balloon or a comb near a battery? The reason is the voltage is very low! Second, point: No, electrons are not energy. Electrons in a wire/electrode behave very much like negative charge (see Hall effect if you are interested) While you cannot say True or False for "A used AA battery contains fewer moles of electrons than a new AA battery." The number of electrons before and after are the same for the reasons that a battery is electrically neutral overall , assuming no mass loss during the battery usage. For a closed ciruit, no electrons were wasted or lost. The electrons were just travelling from one pole to the other pole while traveling with a direction in the circuit. A used battery is reaching equilibrium which means that the battery is no longer able to do useful work (on the electrons in the external conductor). The voltage starts slowly to drop as you keep drawing current. The current also decreases. For example, keep a flashlight running on battery overnight and you will initially see a bright bulb going dimmer, dimmer and finally no light at all. | {
"source": [
"https://chemistry.stackexchange.com/questions/159942",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/119296/"
]
} |
171,252 | I have calibrated pH sensors in the lab on several occasions and have usually used standards of 4.0, 7.0 and 10.0. Recently I received a sensor and was tasked with calibrating it, but it requested standards of 6.86 and 9.18 in addition to the 4.0 — with no ability to change this. I see that you can buy these standards, but they are kinda tricky to come by. My question is: Why these specific numbers, and who uses them? (I have never seen them before.) | The pH 6.86 and 9.18 values come from NIST standard buffer solutions for pH calibration, as described in, for example, NBS special publication 260-53 (1988) (pdf available here). It appears they chose those values because they are close to the desired values of 7 and 9, but are also easy to prepare. For example, the 6.86 results from combining equal concentrations of $\ce{KH2PO4}$ and $\ce{Na2HPO4}$ (see Table 1 of the above-referenced document), whereas the 9.18 buffer is a conveniently round concentration (0.01 M) of borax. The reason that you might be required to use these standards is to ensure that your calibration meets NIST standards. (For those outside of the USA, NIST is the United States National Institute of Standards and Technology, formerly the National Bureau of Standards, responsible for national-level standardization of measurements.)
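As a rough sanity check on the 6.86 value (a back-of-the-envelope note of my own, not from the NIST documents): for the equimolar phosphate buffer, the Henderson–Hasselbalch equation gives $$\mathrm{pH} = \mathrm{p}K_\mathrm{a2} + \log\frac{[\ce{HPO4^2-}]}{[\ce{H2PO4-}]} = \mathrm{p}K_\mathrm{a2} \approx 7.2$$ at equal concentrations. The certified value comes out a little lower (6.86) because the standard is assigned with activity corrections at the buffer's ionic strength, which reduce the effective activity of the doubly charged $\ce{HPO4^2-}$ ion.
| {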
"source": [
"https://chemistry.stackexchange.com/questions/171252",
"https://chemistry.stackexchange.com",
"https://chemistry.stackexchange.com/users/131244/"
]
} |
18 | As of now, I haven't seen any questions that necessarily need MathJax formatting, but I'm sure that at some point it'll be needed. Should we have MathJax support? | Edited to clarify my opinion; feel free to change your vote. Computer graphics can be very maths-heavy, so I think MathJax would be a great addition to this site. I think we should definitely get MathJax, but the community should be aware of the following side effects of activating it: We've had MathJax for a short while on CodeGolf.SE, and there is at least one bug that may affect Stack Snippets if we get them as well. This is probably not a big deal on this site, but should be kept in mind. The other minor drawback is that posts containing MathJax look pretty messy in the search results, but all the other sites using MathJax seem to be getting along with that as well. | {
"source": [
"https://computergraphics.meta.stackexchange.com/questions/18",
"https://computergraphics.meta.stackexchange.com",
"https://computergraphics.meta.stackexchange.com/users/17/"
]
} |
6 | I've heard that the quality of a monte carlo ray tracer (based on path tracing algorithms) is much more realistic than a distributed (stochastic) engine. I try to understand why, but I'm just at the beginning. In order to dive into this topic and understand the basics, can someone point me into the right direction? What part of the algorithm leads into a more realistic render result? | The term "distributed ray tracing" was originally coined by Robert Cook in this 1984 paper . His observation was that in order to perform anti-aliasing in a ray-tracer, the renderer needs to perform spatial upsampling - that is, to take more samples (i.e. shoot more rays) than the number of pixels in the image and combine their results. One way to do this is to shoot multiple rays within a pixel and average their color values, for example. However, if the renderer is already tracing multiple rays per pixel anyway to obtain an anti-aliased image, then these rays can also be "distributed" among additional dimensions than just the pixel position to sample effects that could not be captured by a single ray. The important bit is that this comes without any additional cost on top of spatial upsampling, since you're already tracing those additional rays anyway. For example, if you shoot multiple rays within your pixel to compute an anti-aliased result, you can get motion blur absolutely for free if you also use a different time value for each ray (or soft shadows if they connect to a different point on the light source, or depth of field if they use a different starting point on the aperture, etc.). Monte Carlo ray tracing is a term that is slightly ambiguous. In most cases, it refers to rendering techniques that solve the rendering equation , introduced by Jim Kajiya in 1986, using Monte Carlo integration. Practically all modern rendering techniques that solve the rendering equation, such as path tracing, bidirectional path tracing, progressive photon mapping and VCM, can be classified as Monte Carlo ray tracing techniques. The idea of Monte Carlo integration is that we can compute the integral of any function by randomly choosing points in the integration domain and averaging the value of the function at these points. At a high level, in Monte Carlo ray tracing we can use this technique to integrate the amount of light arriving at the camera within a pixel in order to compute the pixel value. For example, a path tracer does this by randomly picking a point within the pixel to shoot the first ray, and then continues to randomly pick a direction to continue on the surface it lands on, and so forth. We could also randomly pick a position on the time axis if we want to do motion blur, or randomly pick a point on the aperture if wanted to do depth of field, or... If this sounds very similar to distributed ray tracing, that's because it is! We can think of distributed ray tracing as a very informal description of a Monte Carlo algorithm that samples certain effects like soft shadows. Cook's paper lacks the mathematical framework to really reason about it properly, but you could certainly implement distributed ray tracing using a simple Monte Carlo renderer. It's worth noting that distributed ray tracing lacks any description of global illumination effects, which are naturally modeled in the rendering equation (it should be mentioned that Kajiya's paper was published two years after Cook's paper). You can think of Monte Carlo ray tracing as being a more general version of distributed ray tracing. 
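To make the idea of Monte Carlo integration concrete, here is a minimal, self-contained C++ sketch (my own illustration, not from either paper; the integrand and sample count are arbitrary choices). A renderer does the same averaging, just over dimensions like pixel position, time and lens position instead of a 1D interval:
#include <cstdio>
#include <random>

// Estimate the integral of f(x) = x*x over [0, 1] by averaging random
// samples. The exact answer is 1/3; the error shrinks as 1/sqrt(N).
int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const int N = 1000000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = u(rng);   // random point in the integration domain
        sum += x * x;        // evaluate the integrand there
    }
    // Average of the samples, times the size of the domain (here 1.0).
    std::printf("estimate = %f (exact = 0.333333)\n", sum / N);
    return 0;
}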
Monte Carlo ray tracing contains a general mathematical framework that allows you to handle practically any effect, including those mentioned in the distributed ray tracing paper. These days, "distributed ray tracing" is not really a term that's used to refer to the original algorithm. More often you will hear it in conjunction with "distribution effects", which are simply effects such as motion blur, depth of field or soft shadows that cannot be handled with a single-sample raytracer. | {
"source": [
"https://computergraphics.stackexchange.com/questions/6",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/18/"
]
} |
12 | Wikipedia states that a stencil buffer is some arbitrary buffer a shader can use. However, it hints that it's used for clipping, or otherwise "tightly binding" the depth and pixel buffers, slightly contradicting itself. What does the stencil buffer really do, and how is it practically used in modern applications? | The stencil buffer definition by Wikipedia is indeed not great; it focuses too much on the details of modern implementations (OpenGL). I find the disambiguated version easier to understand: A stencil is a template used to draw or paint identical letters, symbols, shapes, or patterns every time it is used. The design produced by such a template is also called a stencil. That's what stencil meant before computer graphics. If you type stencil on Google Images, this is one of the first results: As you can see, it is simply a mask or pattern that can be used to "paint" the negative of the pattern onto something. The stencil buffer works in the exact same way. One can fill the stencil buffer with a selected pattern by doing a stencil render pass, then set the appropriate stencil function, which will define how the pattern is to be interpreted on subsequent drawing, then render the final scene. Pixels that fall into the rejected areas of the stencil mask, according to the compare function, are not drawn. When it comes to implementing the stencil buffer, sometimes it is indeed coupled with the depth buffer. Most graphics hardware uses a 1-byte (8-bit) stencil buffer, which is enough for most applications. Depth buffers are usually implemented using 3 bytes (24 bits), which again is normally enough for most kinds of 3D rendering. So it is only logical to pack the 8 bits of the stencil buffer with the other 24 of the depth buffer, making it possible to store each depth + stencil pixel in a 32-bit integer. That's what Wikipedia meant by: The depth buffer and stencil buffer often share the same area in the RAM of the graphics hardware. One application in which the stencil buffer used to be king was shadow rendering, in a technique called shadow volumes, or sometimes also appropriately called stencil shadows. This was a very clever use of the buffer, but nowadays most of the rendering field seems to have shifted towards depth-based shadow maps.
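As a concrete illustration, here is a minimal sketch of the classic mask-then-draw pattern using standard OpenGL calls (the reference value 1 is an arbitrary choice, and drawMaskShape / drawScene are hypothetical helpers standing in for your own draw code):
// Pass 1: write the mask shape into the stencil buffer only.
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);                    // every fragment passes
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);            // passing fragments write 1
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // disable colour output
drawMaskShape();                                      // hypothetical helper

// Pass 2: draw the scene only where the mask was stencilled in.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);                     // keep fragments where stencil == 1
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);               // leave the stencil untouched
drawScene();                                          // hypothetical helper
| {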
"source": [
"https://computergraphics.stackexchange.com/questions/12",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/71/"
]
} |
37 | Programmers are supposed to have a fairly good idea of the cost of certain operations: for example the cost of an instruction on the CPU, the cost of an L1, L2, or L3 cache miss, the cost of an LHS. When it comes to graphics, I realize I have little to no idea what they are. I have in mind that if we order them by cost, state changes are something like: Shader uniform change. Active vertex buffer change. Active texture unit change. Active shader program change. Active frame buffer change. But that is a very rough rule of thumb, it might not even be correct, and I have no idea what the orders of magnitude are. If we try to put units on it (ns, clock cycles, or number of instructions), how much are we talking about? | The most data I've seen on the relative expense of various state changes is from Cass Everitt and John McDonald's talk on reducing OpenGL API overhead from January 2014. Their talk included this slide (at 31:55): The talk doesn't give any more info on how they measured this (or even whether they're measuring CPU or GPU cost, or both!). But at least it dovetails with the conventional wisdom: render target and shader program changes are the most expensive, uniform updates the least, with vertex buffers and texture changes somewhere in the middle. The rest of their talk also has a lot of interesting wisdom about reducing state-change overhead.
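One common way to act on such a ranking (a sketch of my own, not something from the talk) is to sort draw calls by a key whose high bits hold the most expensive state, so the expensive switches happen least often:
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw record: small integer IDs for each piece of state.
struct Draw {
    uint16_t renderTarget; // most expensive to change -> highest bits
    uint16_t program;
    uint16_t texture;
    uint16_t vertexBuffer; // cheapest to change -> lowest bits
};

static uint64_t sortKey(const Draw& d) {
    return (uint64_t(d.renderTarget) << 48) | (uint64_t(d.program) << 32)
         | (uint64_t(d.texture) << 16) | uint64_t(d.vertexBuffer);
}

// After sorting, consecutive draws share as much expensive state as possible.
void sortDraws(std::vector<Draw>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const Draw& a, const Draw& b) { return sortKey(a) < sortKey(b); });
}
| {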
"source": [
"https://computergraphics.stackexchange.com/questions/37",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/182/"
]
} |
39 | I've read that blur is done in real-time graphics by doing it on one axis and then the other. I've done a bit of convolution in 1D in the past but I am not super comfortable with it, nor do I know exactly what to convolve in this case. Can anyone explain in plain terms how a 2D Gaussian blur of an image is done? I've also heard that the radius of the blur can impact the performance. Is that due to having to do a larger convolution? | In convolution, two mathematical functions are combined to produce a third function. In image processing, these functions are usually called kernels. A kernel is nothing more than a (square) array of pixels (a small image, so to speak). Usually, the values in the kernel add up to one. This is to make sure no energy is added to or removed from the image by the operation. Specifically, a Gaussian kernel (used for Gaussian blur) is a square array of pixels where the pixel values correspond to the values of a Gaussian curve (in 2D). Each pixel in the image gets multiplied by the Gaussian kernel. This is done by placing the center pixel of the kernel on the image pixel and multiplying the values in the original image with the pixels in the kernel that overlap. The values resulting from these multiplications are added up, and that result is used for the value at the destination pixel. Looking at the image, you would multiply the value at (0,0) in the input array by the value at (i) in the kernel array, the value at (1,0) in the input array by the value at (h) in the kernel array, and so on, and then add all these values to get the value for (1,1) in the output image. To answer your second question first: the larger the kernel, the more expensive the operation. So, the larger the radius of the blur, the longer the operation will take. To answer your first question: as explained above, convolution can be done by multiplying each input pixel with the entire kernel. However, if the kernel is separable (which a Gaussian kernel is), you can also apply it along each axis (x and y) independently, which will decrease the total number of multiplications. In proper mathematical terms, if a matrix is separable it can be decomposed into (M×1) and (1×N) matrices. For the Gaussian kernel above this means you can also use the following kernels: $$\frac1{256}\cdot\begin{bmatrix}
1&4&6&4&1\\
4&16&24&16&4\\
6&24&36&24&6\\
4&16&24&16&4\\
1&4&6&4&1
\end{bmatrix}
=
\frac1{256}\cdot\begin{bmatrix}
1\\4\\6\\4\\1
\end{bmatrix}\cdot\begin{bmatrix}
1&4&6&4&1
\end{bmatrix}
$$ You would now convolve the image with one of these 1D kernels and then convolve the result with the other: a horizontal pass followed by a vertical pass (or vice versa), which together produce the same output as the full 2D kernel. For more information on how to see if a kernel is separable, follow this link. Edit: the two kernels shown above use slightly different values. This is because the σ (sigma) parameters used for the Gaussian curves that generated these kernels were slightly different in the two cases. For an explanation of which parameters influence the shape of the Gaussian curve, and thus the values in the kernel, follow this link. Edit: in the second image above it says the kernel that is used is flipped. This of course only makes a difference if the kernel you use is not symmetric. The reason why you need to flip the kernel has to do with the mathematical properties of the convolution operation (see the link for a more in-depth explanation of convolution). Simply put: if you do not flip the kernel, the result of the convolution operation will be flipped. By flipping the kernel, you get the correct result.
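To make the two-pass idea concrete, here is a minimal C++ sketch of my own (not from the answer above) of a separable blur on a greyscale, row-major image, using the [1 4 6 4 1]/16 kernel from the decomposition and clamping at the image edges:
#include <algorithm>
#include <vector>

// One 1D pass of a separable blur over a w*h greyscale image (row-major).
// Running it once horizontally and once vertically is equivalent to
// convolving with the full 5x5 kernel above.
std::vector<float> blurPass(const std::vector<float>& src, int w, int h,
                            bool horizontal) {
    const float k[5] = {1/16.f, 4/16.f, 6/16.f, 4/16.f, 1/16.f};
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.f;
            for (int i = -2; i <= 2; ++i) {
                int xi = horizontal ? std::clamp(x + i, 0, w - 1) : x;
                int yi = horizontal ? y : std::clamp(y + i, 0, h - 1);
                sum += k[i + 2] * src[yi * w + xi];
            }
            dst[y * w + x] = sum;
        }
    return dst;
}
// Usage: auto blurred = blurPass(blurPass(img, w, h, true), w, h, false);
| {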
"source": [
"https://computergraphics.stackexchange.com/questions/39",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/56/"
]
} |
54 | Image filtering operations such as blurs, SSAO, bloom and so forth are usually done using pixel shaders and "gather" operations, where each pixel shader invocation issues a number of texture fetches to access the neighboring pixel values, and computes a single pixel's worth of the result. This approach has a theoretical inefficiency in that many redundant fetches are done: nearby shader invocations will re-fetch many of the same texels. Another way to do it is with compute shaders. These have the potential advantage of being able to share a small amount of memory across a group of shader invocations. For instance, you could have each invocation fetch one texel and store it in shared memory, then calculate the results from there. This might or might not be faster. The question is under what circumstances (if ever) is the compute-shader method actually faster than the pixel-shader method? Does it depend on the size of the kernel, what kind of filtering operation it is, etc.? Clearly the answer will vary from one model of GPU to another, but I'm interested in hearing if there are any general trends. | An architectural advantage of compute shaders for image processing is that they skip the ROP step. It's very likely that writes from pixel shaders go through all the regular blending hardware even if you don't use it. Generally speaking, compute shaders go through a different (and often more direct) path to memory, so you may avoid a bottleneck that you would otherwise have. I've heard of fairly sizable performance wins attributed to this. An architectural disadvantage of compute shaders is that the GPU no longer knows which work items retire to which pixels. If you are using the pixel shading pipeline, the GPU has the opportunity to pack work into a warp/wavefront that writes to an area of the render target which is contiguous in memory (which may be Z-order tiled or something like that for performance reasons). If you are using a compute pipeline, the GPU may no longer kick off work in optimal batches, leading to more bandwidth use. You may be able to turn that altered warp/wavefront packing into an advantage again, though, if you know that your particular operation has a substructure that you can exploit by packing related work into the same thread group. Like you said, you could in theory give the sampling hardware a break by sampling one value per lane and putting the result in groupshared memory for other lanes to access without sampling. Whether this is a win depends on how expensive your groupshared memory is: if it's cheaper than the lowest-level texture cache, then this may be a win, but there's no guarantee of that. GPUs already deal pretty well with highly local texture fetches (by necessity). If you have intermediate stages in the operation where you want to share results, it may make more sense to use groupshared memory (since you can't fall back on the texture sampling hardware without having actually written out your intermediate result to memory). Unfortunately you also can't depend on having results from any other thread group, so the second stage would have to limit itself to only what is available in the same tile. I think the canonical example here is computing the average luminance of the screen for auto-exposure. I could also imagine combining texture upsampling with some other operation (since upsampling, unlike downsampling and blurs, doesn't depend on any values outside a given tile). | {
"source": [
"https://computergraphics.stackexchange.com/questions/54",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/48/"
]
} |
61 | The OpenGL documentation states that fwidth returns the sum of the absolute value of derivatives in x and y. What does this mean in less mathematical terms, and is there a way to visualize it? Based on my understanding of the function, fwidth(p) has access to the value of p in neighboring pixels. How does this work on the GPU without drastically impacting performance, and does it work reliably and uniformly across all pixels? | Pixel screen-space derivatives do drastically impact performance, but they impact performance whether you use them or not, so from a certain point of view they're free! Every GPU in recent history packs a quad of four pixels together and puts them in the same warp/wavefront, which essentially means that they're running right next to each other on the GPU, so accessing values from them is very cheap. Because warps/wavefronts are run in lockstep, the other pixels will also be at exactly the same place in the shader as you are, so the value of p for those pixels will just be sitting in a register waiting for you. These other three pixels will always be executed, even if their results are thrown away. So a triangle that covers a single pixel will always shade four pixels and throw away the results of three of them, just so that these derivative features work! This is considered an acceptable cost (for current hardware) because it isn't just functions like fwidth that use these derivatives: every single texture sample does as well, in order to pick what mipmap of your texture to read from. Consider: if you are very close to a surface, the UV coordinate you are using to sample the texture will have a very small derivative in screen space, meaning you need to use a larger mipmap, and if you are farther the UV coordinate will have a larger derivative in screen space, meaning you need to use a smaller mipmap. As far as what it means in less mathematical terms: fwidth is equivalent to abs(dFdx(p)) + abs(dFdy(p)). dFdx(p) is simply the difference between the value of p at pixel x+1 and the value of p at pixel x, and similarly for dFdy(p).
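To illustrate how those derivatives drive mipmap selection, here is a rough C++ sketch of the computation (a simplification of what real hardware does, not an exact specification):
#include <algorithm>
#include <cmath>

// Choose a mip level from the screen-space derivatives of the UVs:
// take the larger texel footprint of the two pixel axes and use its log2.
float mipLevel(float dudx, float dvdx, float dudy, float dvdy,
               float texWidth, float texHeight) {
    float fx = std::sqrt(dudx * texWidth * dudx * texWidth +
                         dvdx * texHeight * dvdx * texHeight);
    float fy = std::sqrt(dudy * texWidth * dudy * texWidth +
                         dvdy * texHeight * dvdy * texHeight);
    float rho = std::max(fx, fy);            // texels stepped per pixel
    return std::max(0.0f, std::log2(rho));   // level 0 = full resolution
}
| {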
"source": [
"https://computergraphics.stackexchange.com/questions/61",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/88/"
]
} |
81 | If you read papers about subsurface scattering, you'll frequently come across references to something called the "dipole approximation". This term seems to go back to the paper A Practical Model for Subsurface Light Transport by Henrik Wann Jensen et al, but this paper is pretty difficult to understand. Can anyone explain in relatively simple terms what the dipole approximation is and how it's used in rendering subsurface scattering? | The assumption underlying such a model is the same as in lots of other models for skin rendering: the subsurface scattering can be approximated as a diffusion phenomenon. This is good because in highly scattering media the distribution of light loses its dependence on angle and tends toward isotropy. The dipole approximation is a formulation that solves this diffusion problem analytically. Basically, they start by splitting the BSSRDF into a multiple-scattering and a single-scattering component. The multiple scattering is then defined (in Jensen's formulation) as $$S_d(x_i, \vec\omega_i; x_o, \vec\omega_o) = \frac{1}{\pi} F_t(x_i, \vec\omega_i)\, R(\|x_i - x_o\|)\, F_t(x_o, \vec\omega_o)$$ where $F_t$ are Fresnel terms and $R$ is the diffusion profile, expressed as a function of the distance between the entry and exit points. This $R$ is referred to as the diffusion profile, and they formulate this profile via a dipole approximation: the contribution of the incoming light ray is modeled as that of two virtual sources, one positive beneath the surface and one negative above it (that's why "dipole"). Here in the picture r is the $\|x_i - x_o\|$ above. The contribution of those light sources depends on various factors such as the distance of the light from the surface, the scattering coefficient, etc. (See below for a more detailed description of the formula itself.) This model only accounts for multiple scattering events, but that's good enough for skin. It must be noted, though, that for some translucent materials (e.g. smoke and marble) the single scattering is fundamental. The paper proposes a single-scattering formulation as well, but it is expensive. For real-time applications the diffusion profile is usually approximated as a series of Gaussian blurs (as in the seminal work of D'Eon et al. in GPU Gems 3, later used in Jimenez's SSSSS) so as to make it practical for real-time scenarios. In this wonderful paper there are details on such an approximation.
A picture from that paper actually shows how good this formulation is: As a side note, the dipole approximation assumes that the material is semi-infinite; however, this assumption doesn't hold for thin slabs and multi-layered materials such as skin. Building on the dipole work, Donner and Jensen [2005] proposed the multipole approximation, which addresses these problems.
With this model, instead of a single dipole, the authors use a set of them to describe the scattering phenomenon. In this formulation the reflectance and transmittance profiles can be obtained by summing up the contributions of the different dipoles involved. EDIT:
I am putting here the answers to a couple of @NathanReed 's questions in the comment section: Even with the diffusion profile approximation, the BSSRDF model still requires integrating over a radius of nearby points on the surface to gather incoming light, correct? How is that accomplished in, say, a path tracer? Do you have to build some data structure so you can sample points on the surface nearby a given point? The BSSRDF approximation still needs to be integrated over a certain area, yes. In the paper linked they used a Monte Carlo ray-tracer, randomly sampling around a point with a density defined as $\sigma_{tr}e^{-\sigma_{tr}d}$, where $\sigma_{tr}$ is the effective extinction coefficient defined below (it depends on the scattering and absorption coefficients, which are properties of the material) and $d$ is the distance to the point we are evaluating. The density is defined this way because the diffusion term has an exponential fall-off. In [Jensen and Buhler 2002] they proposed an acceleration technique. One of the main concepts was to decouple the sampling from the evaluation of the diffusion term. This way they perform a hierarchical evaluation of the information computed during the sampling phase, clustering together distant samples when it comes to evaluating the diffusion. The implementation described in the paper uses an octree as the data structure.
This technique, according to the paper, is an order of magnitude faster than the full Monte Carlo integration. Unfortunately I never got into an off-line implementation myself, so I can't help more than this. In the real-time sum-of-Gaussians approximations the correct radius is implicitly set when defining the variances of the Gaussian blurs that need to be applied. Why one positive and one negative light? Is the goal for them to cancel each other in some way? Yes, the dipole source method (which dates from way before Jensen's paper) is defined so as to satisfy a boundary condition. Specifically, the fluence must be zero at a certain extrapolated boundary at a distance $2AD$ from the surface, where $A = \frac{1+F_{dr}}{1-F_{dr}}$ and $D = \frac{1}{3\sigma_t'}$, with $F_{dr}$ being the Fresnel reflectivity of the slab considered and $\sigma_t'$ the reduced extinction coefficient described below. EDIT2: I have expanded (a tiny bit) some of the concepts in this answer in a blog post: http://bit.ly/1Q82rqT For those who are not scared by lots of Greek letters in a formula, here's an extract from my thesis where the reflectance profile is briefly described in each term: | {
"source": [
"https://computergraphics.stackexchange.com/questions/81",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/48/"
]
} |
96 | When writing non-trivial shaders (just as when writing any other piece of non-trivial code), people make mistakes. [citation needed] However, I can't just debug it like any other code - you can't just attach gdb or the Visual Studio debugger after all. You can't even do printf debugging, because there's no form of console output. What I usually do is render the data I want to look at as colour, but that is a very rudimentary and amateurish solution. I'm sure people have come up with better solutions. So how can I actually debug a shader? Is there a way to step through a shader? Can I look at the execution of the shader on a specific vertex/primitive/fragment? (This question is specifically about how to debug shader code akin to how one would debug "normal" code, not about debugging things like state changes.) | As far as I know there are no tools that allow you to step through code in a shader (also, in that case you would have to be able to select just the pixel/vertex you want to "debug"; the execution is likely to vary depending on that).
(NOTE: Renderdoc allows you to do so by selecting a specific thread/pixel) What I personally do is a very hacky "colourful debugging". So I sprinkle a bunch of dynamic branches with #if DEBUG / #endif guards that basically say #if DEBUG
if( condition )
outDebugColour = aColorSignal;
#endif
.. rest of code ..
// Last line of the pixel shader
#if DEBUG
OutColor = outDebugColour;
#endif So you can "observe" debug info this way. I usually do various tricks like lerping or blending between various "colour codes" to test more complex events or non-binary stuff. In this "framework" I also find it useful to have a set of fixed conventions for common cases, so that I don't have to constantly go back and check which colour I associated with what.
The important thing is to have good support for hot-reloading of shader code, so you can almost interactively change your tracked data/event and easily switch the debug visualization on/off. If you need to debug something that you cannot display on screen easily, you can always do the same and use a frame analyser tool to inspect your results. I've listed a couple of them in my answer to this other question. It goes without saying that if I am not "debugging" a pixel or compute shader, I pass this "debugColor" info through the pipeline without interpolating it (in GLSL, with the flat keyword). Again, this is very hacky and far from proper debugging, but it is what I am stuck with, not knowing any proper alternative. | {
"source": [
"https://computergraphics.stackexchange.com/questions/96",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/16/"
]
} |
100 | I often find myself copy-pasting code between several shaders. This includes both certain computations or data shared between all shaders in a single pipeline, and common computations which all of my vertex shaders need (or any other stage). Of course, that's horrible practice: if I need to change the code anywhere, I need to make sure I change it everywhere else. Is there an accepted best practice for keeping DRY ? Do people just prepend a single common file to all their shaders? Do they write their own rudimentary C-style preprocessor which parses #include directives? If there are accepted patterns in the industry, I'd like to follow them. | There's a bunch of a approaches, but none is perfect. It's possible to share code by using glAttachShader to combine shaders, but this doesn't make it possible to share things like struct declarations or #define -d constants. It does work for sharing functions. Some people like to use the array of strings passed to glShaderSource as a way to prepend common definitions before your code, but this has some disadvantages: It's harder to control what needs to be included from within the shader (you need a separate system for this.) It means the shader author cannot specify the GLSL #version , due to the following statement in the GLSL spec: The #version directive must occur in a shader before anything else, except for comments and white space. Due to this statement, glShaderSource cannot be used to prepend text before the #version declarations. This means that the #version line needs to be included in your glShaderSource arguments, which means that your GLSL compiler interface needs to somehow be told what version of GLSL is expected to be used. Additionally, not specifying a #version will make the GLSL compiler default to using GLSL version 1.10. If you want to let shader authors specify the #version within the script in a standard way, then you need to somehow insert #include -s after the #version statement. This could be done by explicitly parsing the GLSL shader to find the #version string (if present) and make your inclusions after it, but having access to an #include directive might be preferable to control more easily when those inclusions need to be made. On the other hand, since GLSL ignores comments before the #version line, you could add metadata for includes within comments at the top of your file (yuck.) The question now is: Is there a standard solution for #include , or do you need to roll your own preprocessor extension? There is the GL_ARB_shading_language_include extension, but it has some drawbacks: It is only supported by NVIDIA ( http://delphigl.de/glcapsviewer/listreports2.php?listreportsbyextension=GL_ARB_shading_language_include ) It works by specifying the include strings ahead of time. Therefore, before compiling, you need to specify that the string "/buffers.glsl" (as used in #include "/buffers.glsl" ) corresponds to the contents of the file buffer.glsl (which you have loaded previously). As you may have noticed in point (2), your paths need to start with "/" , like Linux-style absolute paths. This notation is generally unfamiliar to C programmers, and means you can't specify relative paths. A common design is to implement your own #include mechanism, but this can be tricky since you also need to parse (and evaluate) other preprocessor instructions like #if in order to properly handle conditional compilation (like header guards.) 
If you implement your own #include, you also have some liberties in how you want to implement it: You could pass strings ahead of time (like GL_ARB_shading_language_include). You could specify an include callback (this is done by DirectX's D3DCompiler library.) You could implement a system that always reads directly from the filesystem, as done in typical C applications. As a simplification, you can automatically insert header guards for each include in your preprocessing layer, so your processor layer looks like: if (#include and not_included_yet) include_file(); (Credit to Trent Reed for showing me the above technique.) In conclusion, there exists no automatic, standard, and simple solution. In a future solution, you could use some SPIR-V OpenGL interface, in which case the GLSL to SPIR-V compiler could be outside of the GL API. Having the compiler outside the OpenGL runtime greatly simplifies implementing things like #include since it's a more appropriate place to interface with the filesystem. I believe the current widespread method is to just implement a custom preprocessor that works in a way any C programmer should be familiar with.
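To make that concrete, here is a minimal C++ sketch of such a custom preprocessor (my own illustration, deliberately ignoring #if/#ifdef evaluation and error handling): it expands #include "file" directives recursively and implements the implicit header guards described above by skipping files it has already seen:
#include <fstream>
#include <set>
#include <sstream>
#include <string>

// Recursively expand #include "file" directives. Files already included
// once are skipped, which acts as an automatic header guard.
std::string preprocess(const std::string& path, std::set<std::string>& included) {
    if (!included.insert(path).second)
        return "";                                  // already included: skip
    std::ifstream file(path);
    std::ostringstream out;
    std::string line;
    const std::string tag = "#include \"";
    while (std::getline(file, line)) {
        size_t pos = line.find(tag);
        if (pos != std::string::npos) {
            size_t start = pos + tag.size();
            size_t end = line.find('"', start);     // closing quote
            out << preprocess(line.substr(start, end - start), included) << '\n';
        } else {
            out << line << '\n';
        }
    }
    return out.str();
}
// Usage: std::set<std::string> seen; std::string src = preprocess("main.glsl", seen);
| {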
"source": [
"https://computergraphics.stackexchange.com/questions/100",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/16/"
]
} |
161 | A lot of ShaderToy demos share the Ray Marching algorithm to render the scene, but they are often written in a very compact style and I can't find any straightforward examples or explanations. So what is Ray Marching? Some comments suggest that it is a variation of Sphere Tracing. What are the computational advantages of such an approach? | TL;DR They belong to the same family of solvers: sphere tracing is one method of ray marching, which is the family name. Raymarching: a definition Raymarching is a technique a bit like traditional raytracing, used where the surface function is not easy to solve (or impossible to solve without numeric iterative methods). In raytracing you just look up the ray intersection, whereas in ray marching you march forward (or back and forth) until you find the intersection, have enough samples, or whatever it is you're trying to solve. Try to think of it like a Newton-Raphson method for surface finding, or summing for integrating a varying function. This can be useful if you need to: Render volumetrics that are not uniform Render implicit functions, fractals Render other kinds of parametric surfaces where the intersection is not known ahead of time, like parallax mapping Etc. Image 1: Traditional ray marching for surface finding Related posts: how-do-raymarch-shaders-work (GameDev.SE) Sphere tracing Sphere tracing is one possible ray marching algorithm. Not all raymarching uses benefit from this method, as they cannot be converted into this kind of scheme. Sphere tracing is used for rendering implicit surfaces. Implicit surfaces are formed at some level set of a continuous function; in essence, it solves the equation F(x, y, z) = 0. Because this function can be evaluated at each point, one can estimate the biggest possible sphere that can fit the current march step (or, if not exactly, at least a reasonably safe bound). You then know that the next march distance is at least this big. This way you can have adaptive ray marching steps, speeding up the process (a minimal sketch of this loop follows below). Image 2: Sphere tracing* in action; note how the step size is adaptive. For more info see: Sphere tracing: a geometric method for the antialiased ray tracing of implicit surfaces * Perhaps in 2D it should be called circle tracing :)
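Here is a minimal C++ sketch of that adaptive loop (my own illustration; the unit sphere stands in for an arbitrary signed distance function, and the iteration cap, hit epsilon and far limit are arbitrary choices):
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Signed distance to the scene; here a unit sphere at the origin.
static float sceneSDF(Vec3 p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: each step advances by the distance bound the SDF
// returns, so every step is as large as is provably safe.
bool sphereTrace(Vec3 origin, Vec3 dir, float& t) {
    t = 0.0f;
    for (int i = 0; i < 128; ++i) {      // iteration cap
        float d = sceneSDF(add(origin, mul(dir, t)));
        if (d < 1e-4f) return true;      // close enough: surface hit
        t += d;                          // safe step: nothing is nearer than d
        if (t > 100.0f) break;           // ray escaped the scene
    }
    return false;
}
| {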
"source": [
"https://computergraphics.stackexchange.com/questions/161",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/316/"
]
} |
233 | Looking at a light probe texture, it looks like a blurry environment map. What's the difference between the two, how is a light probe made, and what is the benefit of it being blurry? | There are two different common meanings of "light probe" that I'm aware of. Both of them represent the light around a single point in a scene, i.e. what you would see around you in all directions if you were shrunk down to a tiny size and stood at that point. One meaning is a spherical harmonic representation of the light around a point. Spherical harmonics are a collection of functions defined over a spherical domain, which are analogous to sine waves that oscillate a certain number of times around the equator and from pole to pole on the sphere. Spherical harmonics can be used to create a smooth, low-res approximation of any spherical function, by scaling and adding together some number of spherical harmonics—usually 4 (known as linear, first-degree, or two-band SH) or 9 (called quadratic, second-degree, or three-band SH). This is very compact because you only have to store the scaling factors. For instance, for quadratic SH with RGB data, you only need 9*3 = 27 values per probe. So SH makes a very compact, but also necessarily very soft and blurry, representation of the light around a point. This is suitable for diffuse lighting, and perhaps specular with a high roughness. This screenshot from Simon's Tech Blog shows an array of SH light probes spaced throughout a scene, each one showing the indirect lighting received at that point: The other currently common meaning of "light probe" is an environment cube-map whose mip levels have been pre-blurred to different extents so it can be used for specular lighting with varying levels of roughness. This image from Seb Lagarde's blog shows the basic idea: The higher-resolution mips (toward the left) are used for highly polished surfaces where you need a detailed reflected image. Toward the right, the lower-res mip levels are increasingly blurred, and are used for reflections from rougher surfaces. In a shader, when sampling this cubemap, you can calculate your requested mip level based on the material roughness, and take advantage of the trilinear filtering hardware. Both of these types of light probes are used in real-time graphics to approximate indirect lighting. While direct lighting can be calculated in real time (or at least approximated well for area lights), indirect lighting is usually still baked in an offline preprocess due to its complexity and computational overhead. Traditionally, the result of the baking process would be lightmaps, but lightmaps only work for diffuse lighting on static geometry, and they take up a lot of memory besides. Baking a bunch of SH light probes (you can afford a lot of them because they're very compact), plus a sparser sprinkling of cubemap light probes, allows you to get decent diffuse and specular indirect lighting on both static and dynamic objects. They're a popular option in games today.
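To show how compact the SH form really is, here is a small C++ sketch of my own (not from any particular engine) that evaluates a 4-coefficient (linear) probe in a given unit direction; call it once per colour channel. The constants are the usual real SH basis normalization factors, and the 27-value quadratic probes mentioned above simply add five more basis functions per channel:
// Evaluate a linear (4-coefficient) SH light probe in unit direction (dx, dy, dz).
// coeffs[0] is the constant band; coeffs[1..3] are the linear band (y, z, x order).
float evalLinearSH(const float coeffs[4], float dx, float dy, float dz) {
    const float c0 = 0.282095f; // Y_0^0
    const float c1 = 0.488603f; // Y_1^{-1}, Y_1^0, Y_1^1
    return coeffs[0] * c0
         + coeffs[1] * c1 * dy
         + coeffs[2] * c1 * dz
         + coeffs[3] * c1 * dx;
}
| {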
"source": [
"https://computergraphics.stackexchange.com/questions/233",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/56/"
]
} |
306 | Signed Distance Fields (SDFs) were presented as a fast solution for resolution-independent font rendering by Valve in this paper. I already have the Valve solution working, but I'd like to preserve sharpness around corners. Valve states that their method can achieve sharp corners by using a second texture channel ANDed with the base one, but fails to explain how this second channel would be generated. In fact there are a lot of implementation details left out of this paper. I'd like to know if any of you could point me in a direction to get SDF font rendering with sharp corners. | EDIT: Please see my other answer with a concrete solution. I actually solved this exact problem over a year ago for my master's thesis. In the Valve paper, they show that you can AND two distance fields to achieve this, which works as long as you only have one convex corner. For concave corners, you also need the OR operation. This guy actually developed some obscure system to switch between the two operations using four texture channels. However, there is a much simpler operation that can facilitate both AND and OR depending on the situation, and this is the principal idea of my thesis: the median of three. So basically, you use exactly three channels (ideal for RGB), which are completely interchangeable, and combine them using the median operation (choose the middle value out of the three). To accommodate anti-aliasing, we don't work with just booleans, but floating point values, and the AND operation becomes the minimum, and the OR becomes the maximum of two values. The median of three can indeed do both: if a < b, for (a, a, b), the median is the minimum, and for (a, b, b), it is the maximum. The rendering process is still extremely simple. The entire fragment shader including anti-aliasing can look something like this: void main() {
// Bilinear sampling of the distance field
vec3 s = texture2D(sdf, p).rgb;
// Acquire the signed distance
float d = median(s.r, s.g, s.b) - 0.5;
// Weight between inside and outside (anti-aliasing)
float w = clamp(d/fwidth(d) + 0.5, 0.0, 1.0);
// Combining the background and foreground color
gl_FragColor = mix(outsideColor, insideColor, w);
} So the only difference from the original method is computing the median right after sampling the texture. You will have to implement the median function though, which can be done with just 4 min/max operations . Now of course, the question is, how do I build such a three-channel distance field? And this is the tricky part. The most obvious approach that I took in the beginning was to perform a decomposition of the input shape/glyph into three components, and then generate a conventional distance field out of each. The rules for this decomposition aren't that complicated. Firstly, the area with at least 2 out of 3 channels on is the inside. Then, if you imagine this as the RGB color channels, convex corners must be made of a secondary color, and its two primary components continue outward. Concave corners are the inverse: two secondary colors enclose their common primary color, and the wedge between where both edges continue inward is white. I also found that some padding is necessary where two primary or two secondary colors would otherwise touch to avoid artifacts (for example, in the middle stroke of the "N" in the picture). The following image is an example decomposition generated by the program from my thesis: This approach, however, has some drawbacks. One of them is that the special effects, such as outlines and shadows, will no longer work correctly. Fortunately, I also came up with a second, much more elegant method, which generates the distance fields directly, and even supports all of the graphical effects. It is also included in my thesis and so is also over a year old. I am not going to give any more details right now, because I am currently writing a paper that describes this second technique in detail, but I will post it here as soon as it's finished. Anyway, here is an example of the difference in quality. The texture resolution is the same in each image, but the left one uses a regular texture, the middle one uses an ordinary distance field, and the right one uses my three-channel distance field. The performance overhead is only the difference between sampling an RGB texture versus a monochrome one. | {
"source": [
"https://computergraphics.stackexchange.com/questions/306",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/250/"
]
} |
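The median function left as an exercise above can indeed be written with four min/max operations; a GLSL sketch:

// Median of three: returns the min of two values when they agree on the
// low side and the max when they agree on the high side, which is what
// lets one operation act as both the AND and the OR of the fields.
float median(float a, float b, float c)
{
    return max(min(a, b), min(max(a, b), c));
}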
311 | Although simulation models like Boids give good results for bird flocks or fish shoals on a small scale, simulating every single member in real time becomes unrealistic for huge numbers. Is there a way I can model a flock in the distance where only the density of birds is visible? I'd like to have that flowing, changing density gradient with a much smaller number of variables to process. I've tried using a much smaller population and displaying each boid as a blurred area with Gaussian density so that as they overlap the density rises and falls through their interaction. This is reasonably cheap but it never leads to sharp changes in density, either spatially or temporally, which makes it look too uniform. Is there any other way of getting away with a much smaller number of individuals? Or is the only way to get realistic results to prerender? | {
"source": [
"https://computergraphics.stackexchange.com/questions/311",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/231/"
]
} |
350 | Every time I think I understand the relationship between the two terms, I get more information that confuses me. I thought they were synonymous, but now I'm not sure. What is the difference between "diffuse" and "albedo"? Are they interchangeable terms or are they used to mean different things in practice? | The short answer: They are not interchangeable, but their meaning can sometimes appear to overlap in computer graphics literature, giving the potential for confusion. Albedo is the proportion of incident light that is reflected away from a surface. Diffuse reflection is the reflection of light in many directions, rather than in just one direction like a mirror ( specular reflection ). In the case of ideal diffuse reflection ( Lambertian reflectance ), incident light is reflected in all directions independently of the angle at which it arrived. Since in computer graphics rendering literature there is sometimes a "diffuse coefficient" when calculating the colour of a pixel, which indicates the proportion of light reflected diffusely, there is an opportunity for confusion with the term albedo , which also means the proportion of light reflected. If you are rendering a material which only has ideal diffuse reflectance, then the albedo will be equal to the diffuse coefficient. However, in general a surface may reflect some light diffusely and other light specularly or in other direction-dependent ways, so that the diffuse coefficient is only a fraction of the albedo. Note that albedo is a term from observation of planets, moons and other large scale bodies, and is an average over the surface, and often an average over time. The albedo is thus not a useful value in itself for rendering a surface, where you need the specific, current surface property at any given location on the surface. Also note that in astronomy the term albedo can refer to different parts of the spectrum in different contexts - it will not always be referring to human visible light. Another difference, as Nathan Reed points out in a comment, is that albedo is a single average value, which gives you no colour information. For basic rendering the diffuse coefficient gives proportions for red, green and blue components separately, so albedo would only allow you to render greyscale images. For more realistic images, spectral rendering requires the reflectance of a surface as a function of the whole visible spectrum - far more than a single average value. | {
"source": [
"https://computergraphics.stackexchange.com/questions/350",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/423/"
]
} |
1,438 | How do modern games do geometry level-of-detail for object meshes like characters, terrain, and foliage? There are two parts to my question: What does the asset pipeline look like? Do artists make a high-poly model which is later decimated? If so, what decimation algorithms are most popular? Are LOD meshes sometimes done by hand? How do engines transition between different object LODs at run time? Are there any smooth or progressive transitions? The answer might be "different studios use different techniques." If so, please identify some of the most common practices. It would also be great if you could point me to whitepapers/slides that cover specific examples. | For the geometry LOD most games simply switch between a number of predefined LOD meshes. For example "Infamous: Second Son" uses 3 LOD meshes ( Adrian Bentley - "inFAMOUS: Second Son engine postmortem", GDC 2014 ) and "Killzone: Shadow Fall" uses 7 LOD meshes per character ( Michal Valient - "Killzone: Shadow fall demo postmortem", Devstation2013 ). Most of them are generated, but more important ones (like the main character) can be hand made. Meshes are often generated using the popular Simplygon middleware, but sometimes they are simply generated by graphics artists in their favorite 3D package. Games with a large draw distance additionally use imposters for foliage, trees and high buildings ( Adrian Bentley - "inFAMOUS: Second Son engine postmortem", GDC 2014 ). They also employ hierarchical LODs, which replace a set of objects with one. For example in "Just Cause 2" trees are first rendered individually as normal LOD meshes, then individually as imposters and finally as a single merged forest mesh ( Emil Persson, Joel de Vahl - "Populating a Massive Game World", Siggraph2013 ), and in "Sunset Overdrive" distant parts of the world are replaced by a single, automatically offline-generated mesh ( Elan Ruskin - "Streaming Sunset Overdrive's Open World", GDC2015 ). Another component of a LOD system is simplification of materials and shaders. For example "Killzone: Shadow Fall" disables tangent space and normal mapping for the distant LODs ( Michal Valient - "Killzone: Shadow fall demo postmortem", Devstation2013 ). This is usually implemented by globally disabling a set of shader features per LOD, but for engines with shader graphs, where artists can create custom shaders, this needs to be done manually. For the LOD transitions some games simply switch meshes and some use dithering for smooth LOD transitions - at the LOD switch two meshes are rendered: the first gradually fades out and the second fades in ( Simon schreibt Blog - "Assassins Creed 3 – LoD Blending" ). Classic CPU progressive mesh techniques aren't used as they require a costly mesh update and upload to the GPU. Hardware tessellation is used in a few titles, but only for the refinement of the most detailed LOD, as it's slow and in the general case it can't replace predefined geometry LODs. Terrain LODs are handled separately in order to exploit terrain's specific properties. Terrain geometry LOD is usually implemented using clipmaps ( Marcin Gollent - "Landscape creation and rendering in REDengine 3" ). Terrain material LODs are either handled similarly to mesh LODs or using some kind of virtual texture ( Ka Chen - "Adaptive Virtual Texture Rendering In Far Cry 4" ). Finally, if you are interested to see real game LOD pipelines then just browse through the documentation of any of the modern game engines: Unreal Engine 4 - "Creating and Using LODs" , CryEngine - Static LOD and Unity - LOD . | {
"source": [
"https://computergraphics.stackexchange.com/questions/1438",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/14/"
]
} |
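As a sketch of the dithered cross-fade mentioned in the answer above. The Bayer pattern and the per-draw lodFade uniform are assumptions; the idea is that the two LODs are drawn with complementary fade values so that between them every pixel is covered exactly once.

uniform float lodFade;  // 0 = fully faded out, 1 = fully visible

void main()
{
    // 4x4 ordered-dither thresholds in [0,1)
    const float bayer[16] = float[16](
         0.0/16.0,  8.0/16.0,  2.0/16.0, 10.0/16.0,
        12.0/16.0,  4.0/16.0, 14.0/16.0,  6.0/16.0,
         3.0/16.0, 11.0/16.0,  1.0/16.0,  9.0/16.0,
        15.0/16.0,  7.0/16.0, 13.0/16.0,  5.0/16.0);
    ivec2 p = ivec2(gl_FragCoord.xy) % 4;
    if (lodFade < bayer[p.y * 4 + p.x])
        discard;               // screen-door rejection of this fragment
    // ... normal shading continues here
}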
1,461 | I have heard from many sources that having T-junctions in 3D meshes is a bad idea because it could result in cracks during rendering. Can someone explain why that happens, and what one can do to avoid them? | lhf's answer is good from the perspective of tessellation, but these can occur with simpler triangle mesh use cases. Take this trivial example of three, screen-space triangles, ABC, ADE and DBE... Although point E was, mathematically, intended to be exactly on the line segment AB, the pipeline won't be using fully precise values, such as rational numbers (e.g. https://gmplib.org/ ). Instead, it will likely be using floats, and so some approximation/error will be introduced. The result is probably going to be something like: Note that all of the vertices may have inaccuracies. Although the example above shows a crack, the T-junction may instead result in overlap along the edge causing pixels to be drawn twice. This might not seem as bad, but it can cause problems with transparency or stencil operations. You might then think that with floating-point the error introduced will be insignificant, but in a renderer, the screen-space vertex (X,Y) values are nearly always represented by fixed-point numbers and so the displacement from the ideal location will usually be much greater. Further, as the rendering hardware "interpolates" the line segment pixel-by-pixel with its own internal precision, there is even more chance it will diverge from the rounded location of E. If the T-junction is "removed" by, say, also dividing triangle ABC into two, i.e. AEC and EBC, the problem will go away as the shifts introduced by the errors will all be consistent. Now, you might ask why do renderers (especially HW) use fixed-point maths for the vertex XY coordinates? Why don't they use floating-point in order to reduce the problem? Although some did (e.g. Sega's Dreamcast) it can lead to another problem where the triangle set-up maths becomes catastrophically inaccurate, particularly for long-thin triangles, and they change size in unpleasant ways. | {
"source": [
"https://computergraphics.stackexchange.com/questions/1461",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/14/"
]
} |
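A sketch of how such a junction could be detected in a mesh-cleanup pass. The epsilon and the GLSL-style vec3 syntax are assumptions, used here for consistency with the rest of this page; real mesh tools would work in the mesh's own precision.

// True if vertex e lies on the open segment (a, b) within eps, i.e. it
// forms a T-junction against edge ab and the neighboring triangle
// should be split (ABC -> AEC + EBC in the example above).
bool isTJunction(vec3 a, vec3 b, vec3 e, float eps)
{
    vec3 ab = b - a;
    float len2 = dot(ab, ab);
    if (len2 == 0.0) return false;           // degenerate edge
    float t = dot(e - a, ab) / len2;         // closest-point parameter
    if (t <= 0.0 || t >= 1.0) return false;  // must be strictly inside
    return distance(e, a + t * ab) < eps;    // close enough to the edge?
}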
1,502 | When rendering 3D scenes with transformations applied to the objects, normals have to be transformed with the transposed inverse of the model view matrix. So, with a normal $n$, modelViewMatrix $M$, the transformed normal $n'$ is $$n' = (M^{-1})^{T} \cdot n $$ When transforming the objects, it is clear that the normals need to be transformed accordingly. But why, mathematically, is this the corresponding transformation matrix? | Here's a simple proof that the inverse transpose is required. Suppose we have a plane, defined by a plane equation $n \cdot x + d = 0$, where $n$ is the normal. Now I want to transform this plane by some matrix $M$. In other words, I want to find a new plane equation $n' \cdot Mx + d' = 0$ that is satisfied for exactly the same $x$ values that satisfy the previous plane equation. To do this, it suffices to set the two plane equations equal. (This gives up the ability to rescale the plane equations arbitrarily, but that's not important to the argument.) Then we can set $d' = d$ and subtract it out. What we have left is: $$n' \cdot Mx = n \cdot x$$ I'll rewrite this with the dot products expressed in matrix notation (thinking of the vectors as 1-column matrices): $${n'}^T Mx = n^T x$$ Now to satisfy this for all $x$, we must have: $${n'}^T M = n^T$$ Now solving for $n'$ in terms of $n$, $$\begin{aligned}{n'}^T &= n^T M^{-1} \\
n' &= (n^T M^{-1})^T\\
n' &= (M^{-1})^T n\end{aligned}$$ Presto! If points $x$ are transformed by a matrix $M$, then plane normals must transform by the inverse transpose of $M$ in order to preserve the plane equation. This is basically a property of the dot product. In order for the dot product to remain invariant when a transformation is applied, the two vectors being dotted have to transform in corresponding but different ways. Mathematically, this can be described by saying that the normal vector isn't an ordinary vector, but a thing called a covector (aka covariant vector, dual vector, or linear form). A covector is basically defined as "a thing that can be dotted with a vector to produce an invariant scalar". In order to achieve that, it has to transform using the inverse transpose of whatever matrix is operating on ordinary vectors. This holds in any number of dimensions. Note that in 3D specifically, a bivector is similar to a covector. They're not quite the same since they have different units: a covector has units of inverse length while a bivector has units of length squared (area), so they behave differently under scaling. However, they do transform the same way with respect to their orientation, which is what matters for normals. We usually don't care about the magnitude of a normal (we always normalize them to unit length anyway), so we usually don't need to worry about the difference between a bivector and a covector. | {
"source": [
"https://computergraphics.stackexchange.com/questions/1502",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/127/"
]
} |
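A minimal GLSL vertex-shader sketch of the result. The uniform and attribute names are assumptions, and real code would precompute the inverse transpose on the CPU rather than calling inverse() per vertex.

uniform mat4 modelView;

in vec3 position;
in vec3 normal;
out vec3 viewNormal;

void main()
{
    // Points transform by M; normals (covectors) by the inverse transpose.
    mat3 normalMatrix = transpose(inverse(mat3(modelView)));
    viewNormal = normalize(normalMatrix * normal);
    gl_Position = modelView * vec4(position, 1.0); // projection omitted
}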
1,513 | The classical way of shading surfaces in real-time computer graphics is a combination of a (Lambertian) diffuse term and a specular term, most likely Phong or Blinn-Phong. Now with the trend going towards physically-based rendering and thus material models in engines such as Frostbite , Unreal Engine or Unity 3D these BRDFs have changed. For example (a pretty universal one at that), the latest Unreal Engine still uses Lambertian diffuse, but in combination with the Cook-Torrance microfacet model for specular reflection (specifically using GGX/Trowbridge-Reitz and a modified Schlick approximation for the Fresnel term). Furthermore, a 'Metalness' value is being used to distinguish between conductor and dielectric. For dielectrics, diffuse is colored using the albedo of the material, while specular is always colorless. For metals, diffuse is not used and the specular term is multiplied with the albedo of the material. Regarding real-world physical materials, does the strict separation between diffuse and specular exist and if so, where does it come from? Why is one colored while the other is not? Why do conductors behave differently? | To start, I highly suggest reading Naty Hoffman's Siggraph presentation covering the physics of rendering. That said, I will try to answer your specific questions, borrowing images from his presentation. Looking at a single light particle hitting a point on the surface of a material, it can do 2 things: reflect, or refract. Reflected light will bounce away from the surface, similar to a mirror. Refracted light bounces around inside the material, and may exit the material some distance away from where it entered. Finally, every time the light interacts with the molecules of the material, it loses some energy. If it loses enough of its energy, we consider it to be fully absorbed. To quote Naty, "Light is composed of electromagnetic waves. So the optical properties of a substance are closely linked to its electric properties." This is why we group materials as metals or non-metals. Non-metals will exhibit both reflection and refraction. Metallic materials only have reflection. All refracted light is absorbed. It would be prohibitively expensive to try to model the light particle's interaction with the molecules of the material. Instead, we make some assumptions and simplifications. If the pixel size or shading area is large compared to the entry-exit distances, we can make the assumption that the distances are effectively zero.
For convenience, we split the light interactions into two different terms. We call the surface reflection term "specular" and the term resulting from refraction, absorption, scattering, and re-refraction we call "diffuse". However, this is a pretty large assumption. For most opaque materials, this assumption is ok and doesn't differ too much from real life. However, for materials with any kind of transparency, the assumption fails. For example, milk, skin, soap, etc. A material's observed color is the light that is not absorbed. This is a combination of both the reflected light, as well as any refracted light that exits the material. For example, a pure green material will absorb all light that is not green, so the only light to reach our eyes is the green light. Therefore an artist models the color of a material by giving us the attenuation function for the material, i.e. how the light will be absorbed by the material. In our simplified diffuse/specular model, this can be represented by two colors, the diffuse color and the specular color. Back before physically-based materials were used, the artist would arbitrarily choose each of these colors. However, it should seem obvious that these two colors should be related. This is where the albedo color comes in. For example, in UE4, they calculate diffuse and specular color as follows: DiffuseColor = AlbedoColor - AlbedoColor * Metallic;
SpecColor = lerp(0.08 * Specular.xxx, AlbedoColor, Metallic); where Metallic is 0 for non-metals and 1 for metals. The 'Specular' parameter controls the specularity of an object (but it's usually a constant 0.5 for 99% of materials) | {
"source": [
"https://computergraphics.stackexchange.com/questions/1513",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/385/"
]
} |
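The same parameterization transcribed to GLSL with comments, as a sketch: the 0.08 scale is UE4's remapping of the artist-facing Specular value, and Specular defaults to 0.5 for almost all dielectrics, giving F0 = 0.04.

// Derive the two shading colors from a metalness workflow.
void metalnessToColors(vec3 albedo, float metallic, float specular,
                       out vec3 diffuseColor, out vec3 specColor)
{
    // Metals have no diffuse term: all refracted light is absorbed.
    diffuseColor = albedo * (1.0 - metallic);
    // Dielectrics get a small, colorless reflectance; metals take
    // their reflectance color from the albedo instead.
    specColor = mix(vec3(0.08 * specular), albedo, metallic);
}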
1,523 | That it compresses the data compared to the pixel array is obvious. But what makes it different from from normal compression (like png, jpeg)? | As Simon's comment alluded to, one major difference between hardware texture compression and other commonly used image compression is that the former does not use entropy coding. Entropy coding is the use of shorter bit-strings to represent commonly-occurring or repeating patterns in the source data—as seen in container formats like ZIP, many common image formats such as GIF, JPEG, and PNG, and also in many common audio and video formats. Entropy coding is good at compressing all sorts of data, but it intrinsically produces a variable compression ratio. Some areas of the image may have little detail (or the detail is predicted well by the coding model you're using) and require very few bits, but other areas may have complex detail that requires more bits to encode. This makes it difficult to implement random access, as there is no straightforward way to calculate where in the compressed data you can find the pixel at given ( x , y ) coordinates. Also, most entropy coding schemes are stateful, so it's not possible to simply start decoding at an arbitrary place in the stream; you have to start from the beginning to build up the correct state. However, random access is necessary for texture sampling, since a shader may sample from any location in a texture at any time. So, rather than entropy coding, hardware compression uses fixed-ratio, block-based schemes. For example, in DXT / BCn compression , the texture is sliced up into 4×4 pixel blocks, each of which is encoded in either 64 or 128 bits (depending on which format is picked); in ASTC , different formats use block sizes from 4×4 up to 12×12, and all blocks are encoded in 128 bits. The details of how the bits represent the image data vary between formats (and may even vary from one block to the next within the same image), but because the ratio is fixed, it's easy for hardware to calculate where in memory to find the block containing a given ( x , y ) pixel, and each block is self-contained, so it can be decoded independently of any other blocks. Another consideration in hardware texture compression is that the decoding should be efficiently implementable in hardware. This means that heavy math operations and complex dataflow are strongly disfavored. The BCn formats, for instance, can be decoded by doing a handful of 8-bit integer math operations per block to populate a small lookup table, then just looking up the appropriate table entry per pixel. This requires very little area on-chip, which is important because you probably want to decode several blocks in parallel, and thus need several copies of the decode hardware. In contrast, DCT-based formats like JPEG require a nontrivial amount of math per pixel, not to mention a complex dataflow that swaps and broadcasts various intermediate values across pixels within a block. (Look at this article for some of the gory details of DCT decoding.) This would be a lot grosser for hardware implementation, which I'm guessing is why AFAICT, no GPU hardware has ever implemented DCT-based or wavelet-based texture compression. | {
"source": [
"https://computergraphics.stackexchange.com/questions/1523",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/137/"
]
} |
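To make the fixed-ratio random access concrete, here's a sketch of the block addressing for BC1 (8 bytes per 4x4 block). GLSL-style syntax is used for consistency, and the row-major block layout is an assumption about the storage convention.

// Byte offset of the compressed block containing texel (x, y).
// No decoding state is needed, so any block can be fetched and
// decoded independently of all the others.
int bc1BlockByteOffset(int x, int y, int widthInTexels)
{
    int blocksPerRow = (widthInTexels + 3) / 4;
    int blockIndex   = (y / 4) * blocksPerRow + (x / 4);
    return blockIndex * 8; // 64 bits per BC1 block
}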
1,718 | I have a mesh and in the region around each triangle, I want to compute an estimate of the principal curvature directions. I have never done this sort of thing before and Wikipedia does not help a lot. Can you describe or point me to a simple algorithm that can help me compute this estimate? Assume that I know positions and normals of all vertices. | When I needed an estimate of mesh curvature for a skin shader, the algorithm I ended up settling on was this: First, I computed a scalar curvature for each edge in the mesh. If the edge has positions $p_1, p_2$ and normals $n_1, n_2$ , then I estimated its curvature as: $$\text{curvature} = \frac{(n_2 - n_1) \cdot (p_2 - p_1)}{|p_2 - p_1|^2}$$ This calculates the difference in normals, projected along the edge, as a fraction of the length of the edge. (See below for how I came up with this formula.) Then, for each vertex I looked at the curvatures of all the edges touching it. In my case, I just wanted a scalar estimate of "average curvature", so I ended up taking the geometric mean of the absolute values of all the edge curvatures at each vertex. For your case, you might find the minimum and maximum curvatures, and take those edges to be the principal curvature directions (maybe orthonormalizing them with the vertex normal). That's a bit rough, but it might give you a good enough result for what you want to do. The motivation for this formula is looking at what happens in 2D when applied to a circle: Suppose you have a circle of radius $r$ (so its curvature is $1/r$ ), and you have two points on the circle, with their normals $n_1, n_2$ . The positions of the points, relative to the circle's center, are going to be $p_1 = rn_1$ and $p_2 = rn_2$ , due to the property that a circle or sphere's normals always point directly out from its center. Therefore you can recover the radius as $r = |p_1| / |n_1|$ or $|p_2| / |n_2|$ . But in general, the vertex positions won't be relative to the circle's center. We can work around this by subtracting the two: $$\begin{aligned}
p_2 - p_1 &= rn_2 - rn_1 \\
&= r(n_2 - n_1) \\
r &= \frac{|p_2 - p_1|}{|n_2 - n_1|} \\
\text{curvature} = \frac{1}{r} &= \frac{|n_2 - n_1|}{|p_2 - p_1|}
\end{aligned}$$ The result is exact only for circles and spheres. However, we can extend it to make it a bit more "tolerant", and use it on arbitrary 3D meshes, and it seems to work reasonably well. We can make the formula more "tolerant" by first projecting the vector $n_2 - n_1$ onto the direction of the edge, $p_2 - p_1$ . This allows for these two vectors not being exactly parallel (as they are in the circle case); we'll just project away any component that's not parallel. We can do this by dotting with the normalized edge vector: $$\begin{aligned}
\text{curvature} &= \frac{(n_2 - n_1) \cdot \text{normalize}(p_2 - p_1)}{|p_2 - p_1|} \\
&= \frac{(n_2 - n_1) \cdot (p_2 - p_1)/|p_2 - p_1|}{|p_2 - p_1|} \\
&= \frac{(n_2 - n_1) \cdot (p_2 - p_1)}{|p_2 - p_1|^2}
\end{aligned}$$ Et voilà, there's the formula that appeared at the top of this answer. By the way, a nice side benefit of using the signed projection (the dot product) is that the formula then gives a signed curvature: positive for convex, and negative for concave surfaces. Another approach I can imagine using, but haven't tried, would be to estimate the second fundamental form of the surface at each vertex. This could be done by setting up a tangent basis at the vertex, then converting all neighboring vertices into that tangent space, and using least-squares to find the best-fit 2FF matrix. Then the principal curvature directions would be the eigenvectors of that matrix. This seems interesting as it could let you find curvature directions "implied" by the neighboring vertices without any edges explicitly pointing in those directions, but on the other hand is a lot more code, more computation, and perhaps less numerically robust. A paper that takes this approach is Rusinkiewicz, "Estimating Curvatures and Their Derivatives on Triangle Meshes" . It works by estimating the best-fit 2FF matrix per triangle, then averaging the matrices per-vertex (similar to how smooth normals are calculated). | {
"source": [
"https://computergraphics.stackexchange.com/questions/1718",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/14/"
]
} |
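The per-edge estimate at the top of the answer is a one-liner in code; a GLSL-style sketch:

// Signed curvature estimate for a mesh edge (p1,n1)-(p2,n2): the change
// in normal projected along the edge, per unit edge length. Positive
// for convex edges, negative for concave ones.
float edgeCurvature(vec3 p1, vec3 n1, vec3 p2, vec3 n2)
{
    vec3 dp = p2 - p1;
    return dot(n2 - n1, dp) / dot(dp, dp); // |dp|^2 in the denominator
}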
1,768 | For reference, what I'm referring to is the "generic name" for the technique first (I believe) introduced with idTech 5's MegaTexture technology. See the video here for a quick glance on how it works. I've been skimming some papers and publications related to it lately, and what I don't understand is how it can possibly be efficient. Doesn't it require constant recalculation of UV coordinates from the "global texture page" space into the virtual texture coordinates? And how doesn't that curb most attempts at batching geometry altogether? How can it allow arbitrary zooming in? Wouldn't it at some point require subdividing polygons? There is just so much I don't understand, and I have been unable to find any actually easily approachable resources on the topic. | Overview The main reason for Virtual Texturing (VT), or Sparse Virtual Textures , as it is sometimes called, is as a memory optimization. The gist of the thing is to only move into video memory the actual texels (generalized as pages/tiles) that you might need for a rendered frame. So it will allow you to have much more texture data in offline or slow storage (HDD, Optical-Disk, Cloud) than would otherwise fit in video memory or even main memory. If you understand the concept of Virtual Memory used by modern Operating Systems, it is the same thing in its essence (the name is not given by accident). VT does not require recomputing UVs in the sense that you'd do that each frame before rendering a mesh, then resubmit vertex data, but it does require some substantial work in the Vertex and Fragment shaders to perform the indirect lookup from the incoming UVs. In a good implementation however, it should be completely transparent for the application if it is using a virtual texture or a traditional one. Actually, most of the time an application will mix both types of texturing, virtual and traditional. Batching can in theory work very well, though I have never looked into the details of this. Since the usual criteria for grouping geometry are the textures, and with VT, every polygon in the scene can share the same "infinitely large" texture, theoretically, you could achieve full scene drawing with 1 draw call. But in reality, other factors come into play making this impractical. Issues with VT Zooming in/out and abrupt camera movement are the hardest things to handle in a VT setup. It can look very appealing for a static scene, but once things start moving, more texture pages/tiles will be requested than you can stream from external storage. Async file IO and threading can help, but if it is a real-time system, like in a game, you'll just have to render for a few frames with lower resolution tiles until the hi-res ones arrive, every now and then, resulting in a blurry texture. There's no silver bullet here and that's the biggest issue with the technique, IMO. Virtual Texturing also doesn't handle transparency in an easy way, so transparent polygons need a separate traditional rendering path for them. All in all, VT is interesting, but I wouldn't recommend it for everyone. It can work well, but it is hard to implement and optimize, plus there are just too many corner cases and case-specific tweaks needed for my taste. But for large open-world games or data visualization apps, it might be the only possible approach to fit all the content into the available hardware. With a lot of work, it can be made to run fairly efficiently even on limited hardware, like we can see in the PS3 and XBOX360 versions of id's Rage .
Implementation I have managed to get VT working on iOS with OpenGL-ES, to a certain degree. My implementation is not "shippable", but I could conceivably make it so if I wanted and had the resources. You can view the source code here ; it might help to get a better idea of how the pieces fit together. Here's a video of a demo running on the iOS Sim. It looks very laggy because the simulator is terrible at emulating shaders, but it runs smoothly on a device. The following diagram outlines the main components of the system in my implementation. It differs quite a bit from Sean's SVT demo (link down below), but it is closer in architecture to the one presented by the paper Accelerating Virtual Texturing Using CUDA , found in the first GPU Pro book (link below). Page Files are the virtual textures, already cut into tiles (AKA pages) as a preprocessing step, so they are ready to be moved from disk into video memory whenever needed. A page file also contains the whole set of mipmaps, also called virtual mipmaps . Page Cache Manager keeps an application-side representation of the Page Table and Page Indirection textures. Since moving a page from offline storage to memory is expensive, we need a cache to avoid reloading what is already available. This cache is a very simple Least Recently Used (LRU) cache. The cache is also the component responsible for keeping the physical textures up-to-date with its own local representation of the data. The Page Provider is an async job queue that will fetch the pages needed for a given view of the scene and send them to the cache. The Page Indirection texture is a texture with one pixel for each page/tile in the virtual texture, that will map the incoming UVs to the Page Table cache texture that has the actual texel data. This texture can get quite large, so it must use some compact format, like RGBA 8:8:8:8 or RGB 5:6:5. But we are still missing a key piece here, and that's how to determine which pages must be loaded from storage into the cache and consequently into the Page Table . That's where the Feedback Pass and the Page Resolver enter. The Feedback Pass is a pre-render of the view, with a custom shader and at a much lower resolution, that will write the ids of the required pages to the color framebuffer. That colorful patchwork of the cube and sphere above is actual page indexes encoded as an RGBA color. This pre-pass rendering is then read into main memory and processed by the Page Resolver to decode the page indexes and fire the new requests with the Page Provider . After the Feedback pre-pass, the scene can be rendered normally with the VT lookup shaders. But note that we don't wait for new page requests to finish; that would be terrible, because we'd simply block on synchronous file IO. The requests are asynchronous and might or might not be ready by the time the final view is rendered. If they are ready, sweet, but if not, we always keep a locked page of a low-res mipmap in the cache as a fallback, so we have some texture data in there to use, but it is going to be blurry. Other resources worth checking out The first book in the GPU Pro series . It has two very good articles on VT. Mandatory paper by MrElusive: Software Virtual Textures . The Crytek paper by Martin Mittring: Advanced Virtual Texture Topics . And Sean's presentation on youtube , which I see you've already found. VT is still a somewhat hot topic in computer graphics, so there's tons of good material available, and you should be able to find a lot more.
If there's anything else I can add to this answer, please feel free to ask. I'm a bit rusty on the topic, haven't read much about it for the past year, but it is always good for the memory to revisit stuff :) | {
"source": [
"https://computergraphics.stackexchange.com/questions/1768",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/2111/"
]
} |
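The per-fragment indirection at the heart of the scheme is small; a simplified GLSL sketch for a single mip level. The texture names and the packing of the indirection texels are assumptions; real implementations also store a per-page scale/mip in each indirection entry.

uniform sampler2D pageIndirection; // one texel per virtual page
uniform sampler2D pageCache;       // physical texture of resident pages
uniform vec2 pagesInCache;         // cache size in pages, e.g. (32, 32)

vec4 sampleVirtualTexture(vec2 virtualUV)
{
    // Normalized origin of the physical page holding this virtual page.
    vec2 pageOrigin = texture(pageIndirection, virtualUV).xy;
    // Position of the texel inside its page, in [0,1).
    vec2 inPage = fract(virtualUV * vec2(textureSize(pageIndirection, 0)));
    // Physical UV = page origin + offset scaled to one page's extent.
    return texture(pageCache, pageOrigin + inPage / pagesInCache);
}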
2,018 | In recent games I have noticed something called Tessellation. Turning the thing ON destroys my frame rate. I have noticed that when turned on it looks like anti-aliasing. Can someone give me further information on what exactly the GPU does? | Tessellation is a technique that allows you to generate primitives (triangles, lines, points and such) on the graphics card. Specifically, it lets you repeatedly subdivide the current geometry into a finer mesh. This allows you to load a relatively coarse mesh on your graphics card, generate more vertices and triangles dynamically, and then have a mesh on the screen that looks much smoother. Most of the time this tessellation is done anew in each single frame, and this could be the reason that your frame rate drops once you enable this. Tessellation is done in multiple stages, and it is done AFTER the vertex shader. The terms for each stage vary based on the API. In DirectX, they are the Hull Shader, Hardware Tessellation, and the Domain Shader. In OpenGL, they are called the Tessellation Control Shader, Tessellation Primitive Generation, and the Tessellation Evaluation Shader. The first and the last stages are programmable; the actual tessellation is done by the hardware in a fixed-function stage. In the Tessellation Control Shader you set the type and number of subdivisions. Then the hardware tessellator divides the geometry according to the Control Shader. Lastly, the Tessellation Evaluation Shader is called for each newly generated vertex. In this shader, you set the type of primitive you want to generate and how the vertices are spaced, among many other things. This shader can also be used to do all sorts of per-vertex calculations just like a vertex shader. It is guaranteed to be called at least once for each generated vertex. If you want to do any further work on the primitives, you can add a Geometry shader. | {
"source": [
"https://computergraphics.stackexchange.com/questions/2018",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/2641/"
]
} |
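A minimal GLSL tessellation control shader for triangle patches, as a sketch of the first programmable stage described above. A fixed level is used here for simplicity; real code computes levels from distance or screen-space edge length.

layout(vertices = 3) out;   // pass one triangle patch through

uniform float tessLevel;    // subdivision amount, e.g. 8.0

void main()
{
    // Copy this invocation's control point through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    // Only one invocation needs to write the patch's tessellation levels.
    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = tessLevel;
        gl_TessLevelOuter[1] = tessLevel;
        gl_TessLevelOuter[2] = tessLevel;
        gl_TessLevelInner[0] = tessLevel;
    }
}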
2,231 | Games and other graphically intensive applications use frameworks like OpenGL and DirectX. Also they require features like pixel shaders and DX12. But why would we need all these frameworks and GPU features when we could just draw everything pixel by pixel? First, the game would have to be compiled in a way such that it is drawn pixel by pixel. This is likely to make the game executable big, but will it be faster and work on any 32-bit color GPU (even old ones)? I know that the first 3D games were drawn pixel by pixel, but why aren't they doing it now? | Speed is the most common reason why this is not done. In fact you can do what you propose, if you make your own operating system; it's just going to be very slow for architectural reasons. So the assumption that it's faster is a bit flawed. Even if it would be faster, it would be less efficient in terms of development (like a 1% speed increase for 10 times the work). Copying the data over from the CPU to the graphics card is a relatively slow operation. The less you copy, the faster your update speed can be. So ideally you would have most of the data on your GPU and only update small chunks of data. There is a world of difference between copying over 320x200 pixels compared to 1920x1200 or more. Note that the number of pixels you need to update grows quadratically as the sides grow. Example: It's cheaper to tell the GPU to move the image 10 pixels right than to copy the pixels manually to the video memory in different locations. Why do you have to go through an API? Simply because it's not your system. The operating system can not allow you to do whatever you want, for safety reasons. Secondly, the operating system needs to abstract hardware away; even the OS is talking to the driver through some abstracted system, an API if you will. In fact I would rate the likelihood that your system would be faster, if you just did all the work yourself, at close to zero. It's a bit like comparing C and assembly. Sure you can write assembly, but compilers are pretty smart these days and optimize better and better all the time. It's hard to be better manually, and even if you can, your productivity will go down the drain. PS: An API does not make it impossible to do this update just like old games did it. It's just inefficient, that's all; not because of the API, mind you, but because it is inefficient, period. PPS: This is why they are rolling out Vulkan. | {
"source": [
"https://computergraphics.stackexchange.com/questions/2231",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/2964/"
]
} |
3,575 | From the wiki : "the Vulkan API was initially referred to as the 'next generation OpenGL initiative' by Khrono", and that it is "a grounds-up redesign effort to unify OpenGL and OpenGL ES into one common API that will not be backwards compatible with existing OpenGL versions". So should those now getting into graphics programming be better served to learn Vulkan instead of OpenGL? It seem they will serve the same purpose. | Hardly! This seems a lot like asking "Should new programmers learn C++ instead of C," or "Should new artists be learning digital painting instead of physical painting." Especially because it's NOT backward compatible, graphics programmers would be foolish to exclude the most common graphics API in the industry, simply because there's a new one. Additionally, OpenGL does different things differently. It's entirely possible that a company would choose OpenGL over Vulkan, especially this early in the game, and that company would not be interested in someone who doesn't know OpenGL, regardless of whether they know Vulkan or not. Specialization is rarely a marketable skill. For those who don't need to market their skills as such, like indie developers, it'd be even MORE foolish to remove a tool from their toolbox. An indie dev is even more dependent on flexibility and being able to choose what works, over what's getting funded. Specializing in Vulkan only limits your options. Specialization is rarely an efficient paradigm. | {
"source": [
"https://computergraphics.stackexchange.com/questions/3575",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/3477/"
]
} |
3,955 | I implemented a physically based path tracer after studying PBRT by M. Pharr and G. Humphreys. Now I'm trying to apply physically based rendering to real time graphics using OpenGL ES (in an iPhone application). I want to start using Oren-Nayar and Cook-Torrance as diffuse and specular BRDF but I have a problem: how do I model indirect lighting? In a path tracer (like the one contained in pbrt) the indirect/ambient light is given "automatically" from the path tracing algorithm, as it follows the path of light rays taking into account direct and indirect lighting. How do I model the indirect lighting in a physically based render written in OpenGL ES, so using real time computer graphics? | Real-time graphics deploys a variety of approximations to deal with the computational expense of simulating indirect lighting, trading off between runtime performance and lighting fidelity. This is an area of active research, with new techniques appearing every year. Ambient lighting At the very simplest end of the range, you can use ambient lighting : a global, omnidirectional light source that applies to every object in the scene, without regard to actual light sources or local visibility. This is not at all accurate, but is extremely cheap, easy for an artist to tweak, and can look okay depending on the scene and the desired visual style. Common extensions to basic ambient lighting include: Make the ambient color vary directionally, e.g. using spherical harmonics (SH) or a small cubemap , and looking up the color in a shader based on each vertex's or pixel's normal vector. This allows some visual differentiation between surfaces of different orientations, even where no direct light reaches them. Apply ambient occlusion (AO) techniques including pre-computed vertex AO, AO texture maps, AO fields , and screen-space AO (SSAO) . These all work by attempting to detect areas such as holes and crevices where indirect light is less likely to bounce into, and darkening the ambient light there. Add an environment cubemap to provide ambient specular reflection. A cubemap with a decent resolution (128² or 256² per face) can be quite convincing for specular on curved, shiny surfaces. Baked indirect lighting The next "level", so to speak, of techniques involve baking (pre-computing offline) some representation of the indirect lighting in a scene. The advantage of baking is you can get pretty high-quality results for little real-time computational expense, since all the hard parts are done in the bake. The trade-offs are that the time needed for the bake process harms level designers' iteration rate; more memory and disk space are required to store the precomputed data; the ability to change the lighting in real-time is very limited; and the bake process can only use information from static level geometry, so indirect lighting effects from dynamic objects such as characters will be missed. Still, baked lighting is very widely used in AAA games today. The bake step can use any desired rendering algorithm including path tracing, radiosity, or using the game engine itself to render out cubemaps (or hemicubes ). The results can be stored in textures ( lightmaps ) applied to static geometry in the level, and/or they can also be converted to SH and stored in volumetric data structures, such as irradiance volumes (volume textures where each texel stores an SH probe) or tetrahedral meshes . You can then use shaders to look up and interpolate colors from that data structure and apply them to your rendered geometry. 
The volumetric approach allows baked lighting to be applied to dynamic objects as well as static geometry. The spatial resolution of the lightmaps etc. will be limited by memory and other practical constraints, so you might supplement the baked lighting with some AO techniques to add high-frequency detail that the baked lighting can't provide, and to respond to dynamic objects (such as darkening the indirect light under a moving character or vehicle). There's also a technique called precomputed radiance transfer (PRT) , which extends baking to handle more dynamic lighting conditions. In PRT, instead of baking the indirect lighting itself, you bake the transfer function from some source of light—usually the sky—to the resultant indirect lighting in the scene. The transfer function is represented as a matrix that transforms from source to destination SH coefficients at each bake sample point. This allows the lighting environment to be changed, and the indirect lighting in the scene will respond plausibly. Far Cry 3 and 4 used this technique to allow a continuous day-night cycle, with indirect lighting varying based on the sky colors at each time of day. One other point about baking: it may be useful to have separate baked data for diffuse and specular indirect lighting. Cubemaps work much better than SH for specular (since cubemaps can have a lot more angular detail), but they also take up a lot more memory, so you can't afford to place them as densely as SH samples. Parallax correction can be used to somewhat make up for that, by heuristically warping the cubemap to make its reflections feel more grounded to the geometry around it. Fully real-time techniques Finally, it's possible to compute fully dynamic indirect lighting on the GPU. It can respond in real-time to arbitrary changes of lighting or geometry. However, again there is a tradeoff between runtime performance, lighting fidelity, and scene size. Some of these techniques need a beefy GPU to work at all, and may only be feasible for limited scene sizes. They also typically support only a single bounce of indirect light. A dynamic environment cubemap, where the faces of the cubemap are re-rendered each frame using six cameras clustered around a chosen point, can provide decently good ambient reflections for a single object. This is often used for the player car in racing games, for instance. Screen-space global illumination , an extension of SSAO that gathers bounce lighting from nearby pixels on the screen in a post-processing pass. Screen-space raytraced reflection works by ray-marching through the depth buffer in a post-pass. It can provide quite high-quality reflections as long as the reflected objects are on-screen. Instant radiosity works by tracing rays into the scene using the CPU, and placing a point light at each ray hit point, which approximately represents the outgoing reflected light in all directions from that ray. These many lights, known as virtual point lights (VPLs), are then rendered by the GPU in the usual way. Reflective shadow maps (RSMs) are similar to instant radiosity, but the VPLs are generated by rendering the scene from the light's point of view (like a shadow map) and placing a VPL at each pixel of this map. Light propagation volumes consist of 3D grids of SH probes placed throughout the scene. RSMs are rendered and used to "inject" bounce light into the SH probes nearest the reflecting surfaces. 
Then a flood-fill-like process propagates light from each SH probe to surrounding points in the grid, and the result of this is used to apply lighting to the scene. This technique has been extended to volumetric light scattering as well. Voxel cone tracing works by voxelizing the scene geometry (likely using varying voxel resolutions, finer near the camera and coarser far away), then injecting light from RSMs into the voxel grid. When rendering the main scene, the pixel shader performs a "cone trace"—a ray-march with gradually increasing radius—through the voxel grid to gather incoming light for either diffuse or specular shading. Most of these techniques are not widely used in games today due to problems scaling up to realistic scene sizes, or other limitations. The exception is screen-space reflection, which is very popular (though it's usually used with cubemaps as a fallback, for regions where the screen-space part fails). As you can see, real-time indirect lighting is a huge topic and even this (rather long!) answer can only provide a 10,000-foot overview and context for further reading. Which approach is best for you will depend greatly on the details of your particular application, what constraints you're willing to accept, and how much time you have to put into it. | {
"source": [
"https://computergraphics.stackexchange.com/questions/3955",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/2237/"
]
} |
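As a sketch of the cheapest option above, evaluating a directionally varying ambient term from a linear (4-coefficient) SH probe. The coefficient layout is an assumption (conventions vary), and the basis normalization and cosine-convolution factors are taken as pre-folded into the stored coefficients.

// shC[0] is the constant (L0) term; shC[1..3] are the linear (L1)
// terms paired with the y, z, x axes in this (assumed) layout.
vec3 evalLinearSH(vec3 shC[4], vec3 n)
{
    return shC[0] + shC[1] * n.y + shC[2] * n.z + shC[3] * n.x;
}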
4,422 | Often a similar hardware feature is exposed via DirectX and OpenGL using different terminology. For example: Constant Buffer / Uniform Buffer Object, RWBuffer / SSBO. I am looking for an exhaustive chart that describes which DirectX terminology is used to refer to which OpenGL concept, and vice-versa. Where can I find such a resource? | I haven't been able to find such a chart on the web, so I made one here. (Everyone, feel free to add, elaborate, or correct any mistakes. Some of these are just best guesses based on a partial understanding of the API and hardware internals.)

API Basics
D3D11                         OpenGL 4.x
-----                         ----------
device                        context
immediate context             (implicit; no specific name)
deferred context              (no cross-vendor equivalent, but see GL_NV_command_list)
swap chain                    (implicit; no specific name)
(no cross-vendor equivalent)  extensions
debug layer; info queue       GL_KHR_debug extension

Shaders
D3D11                         OpenGL 4.x
-----                         ----------
pixel shader                  fragment shader
hull shader                   tessellation control shader
domain shader                 tessellation evaluation shader
(vertex shader, geometry shader, and compute shader are called the same in both)
registers                     binding points
semantics                     interface layouts
SV_Foo semantics              gl_Foo builtin variables
class linkage                 subroutines
(no equivalent)               program objects; program linking
(D3D11's normal               separate shader objects
behavior; no
specific name)

Geometry and Drawing
D3D11                         OpenGL 4.x
-----                         ----------
vertex buffer                 vertex attribute array buffer; vertex buffer object
index buffer                  element array buffer
input layout                  vertex array object (sort of)
Draw                          glDrawArrays
DrawIndexed                   glDrawElements
(instancing and indirect draw are called similarly in both)
(no equivalent)               multi-draw, e.g. glMultiDrawElements
stream-out                    transform feedback
DrawAuto                      glDrawTransformFeedback
predication                   conditional rendering
(no equivalent)               sync objects

Buffers and Textures
D3D11                         OpenGL 4.x
-----                         ----------
constant buffer               uniform buffer object
typed buffer                  texture buffer
structured buffer             (no specific name; subset of SSBO features)
UAV buffer; RWBuffer          SSBO (shader storage buffer object)
UAV texture; RWTexture        image load/store
shader resource view          texture view
sampler state                 sampler object
interlocked operations        atomic operations
append/consume buffer         SSBO + atomic counter
discard buffer/texture        invalidate buffer/texture
(no equivalent)               persistent mapping
(D3D11's normal               immutable storage
behavior; no
specific name)
(implicitly inserted          glMemoryBarrier; glTextureBarrier
by the API)

Render Targets
D3D11                         OpenGL 4.x
-----                         ----------
(no equivalent)               framebuffer object
render target view            framebuffer color attachment
depth-stencil view            framebuffer depth-stencil attachment
multisample resolve           blit multisampled buffer to non-multisampled one
multiple render targets       multiple color attachments
render target array           layered image
(no equivalent)               renderbuffer

Queries
D3D11                         OpenGL 4.x
-----                         ----------
timestamp query               timer query
timestamp-disjoint query      (no equivalent)
(no equivalent)               time-elapsed query
occlusion query               samples-passed query
occlusion predicate query     any-samples-passed query
pipeline statistics query     (no equivalent in core, but see GL_ARB_pipeline_statistics_query)
SO statistics query           primitives-generated/-written queries
(no equivalent)               query buffer object

Compute Shaders
D3D11                         OpenGL 4.x
-----                         ----------
thread                        invocation
thread group                  work group
thread group size             local size
threadgroup variable          shared variable
group sync                    "plain" barrier
group memory barrier          shared memory barrier
device memory barrier         atomic+buffer+image memory barriers
all memory barrier            group memory barrier

Other Resources
Porting from DirectX11 to OpenGL 4.2 – not exhaustive, but a quick discussion of common porting issues
MSDN: GLSL-to-HLSL porting guide – lists correspondences and differences between GLSL and HLSL shading languages | {
"source": [
"https://computergraphics.stackexchange.com/questions/4422",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/110/"
]
} |
4,979 | What is importance sampling? Every article I read about it mentions 'PDF'; what is that as well? From what I gather, importance sampling is a technique to only sample areas on a hemisphere that matter more than others. So, ideally, I should sample rays towards light sources to reduce noise and increase speed. Also, some BRDFs at grazing angles have little to no difference in the calculation, so using importance sampling to avoid that is good? If I were to implement importance sampling for a Cook-Torrance BRDF, how could I do this? | Short answer: Importance sampling is a method to reduce variance in Monte Carlo integration by choosing an estimator close to the shape of the actual function. PDF is an abbreviation for Probability Density Function. A $pdf(x)$ gives the probability of a random sample generated being $x$. Long answer: To start, let's review what Monte Carlo integration is, and what it looks like mathematically. Monte Carlo integration is a technique to estimate the value of an integral. It's typically used when there isn't a closed-form solution to the integral. It looks like this: $$\int f(x) \, dx \approx \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{pdf(x_{i})}$$ In English, this says that you can approximate an integral by averaging successive random samples from the function. As $N$ gets large, the approximation gets closer and closer to the solution. $pdf(x_{i})$ represents the probability density function of each random sample. Let's do an example: calculate the value of the integral $I$. $$I = \int_{0}^{2\pi} e^{-x} \sin(x) dx$$ Let's use Monte Carlo integration: $$I \approx \frac{1}{N} \sum_{i=1}^N \frac{e^{-x_i} \sin(x_i)}{pdf(x_{i})}$$ A simple Python program to calculate this is: import random
import math
N = 200000
TwoPi = 2.0 * math.pi
sum = 0.0
for i in range(N):
x = random.uniform(0, TwoPi)
fx = math.exp(-x) * math.sin(x)
pdf = 1 / (TwoPi - 0.0)
sum += fx / pdf
I = (1 / N) * sum
print(I) If we run the program we get $I = 0.4986941$ Using integration by parts, we can get the exact solution: $$I = \frac{1}{2} \left(1 - e^{-2\pi}\right) = 0.4990663$$ You'll notice that the Monte Carlo solution is not quite correct. This is because it is an estimate. That said, as $N$ goes to infinity, the estimate should get closer and closer to the correct answer. Already at $N = 2000$ some runs are almost identical to the correct answer. A note about the PDF: In this simple example, we always take a uniform random sample. A uniform random sample means every sample has the exact same probability of being chosen. We sample in the range $[0, 2\pi]$, so $pdf(x) = 1 / (2\pi - 0)$. Importance sampling works by not sampling uniformly. Instead we try to choose more samples that contribute a lot to the result (important), and fewer samples that only contribute a little to the result (less important). Hence the name, importance sampling. If you choose a sampling function whose pdf very closely matches the shape of $f$, you can greatly reduce the variance, which means you can take fewer samples. However, if you choose a sampling function whose value is very different from $f$, you can increase the variance. See the picture below: (Image from Wojciech Jarosz's dissertation, Appendix A.) One example of importance sampling in path tracing is how to choose the direction of a ray after it hits a surface. If the surface is not perfectly specular (i.e. a mirror or glass), the outgoing ray can be anywhere in the hemisphere. We could uniformly sample the hemisphere to generate the new ray. However, we can exploit the fact that the rendering equation has a cosine factor in it: $$L_{\text{o}}(p, \omega_{\text{o}}) = L_{e}(p, \omega_{\text{o}}) + \int_{\Omega} f(p, \omega_{\text{i}}, \omega_{\text{o}}) L_{\text{i}}(p, \omega_{\text{i}}) \left | \cos \theta_{\text{i}} \right | d\omega_{\text{i}}$$ Specifically, we know that any rays at the horizon will be heavily attenuated (specifically, by the $\cos \theta_{\text{i}}$ factor). So, rays generated near the horizon will not contribute very much to the final value. To combat this, we use importance sampling. If we generate rays according to a cosine-weighted hemisphere, we ensure that more rays are generated well above the horizon, and fewer near the horizon. This will lower variance and reduce noise. In your case, you specified that you will be using a Cook-Torrance, microfacet-based BRDF. The common form is: $$f(p, \omega_{\text{i}}, \omega_{\text{o}}) = \frac{F(\omega_{\text{i}}, h) G(\omega_{\text{i}}, \omega_{\text{o}}, h) D(h)}{4 \cos(\theta_{i}) \cos(\theta_{o})}$$ where $$F(\omega_{\text{i}}, h) = \text{Fresnel function} \\
G(\omega_{\text{i}}, \omega_{\text{o}}, h) = \text{Geometry Masking and Shadowing function} \\
D(h) = \text{Normal Distribution Function}$$ The blog "A Graphic's Guy's Note" has an excellent write-up on how to sample Cook-Torrance BRDFs. I will refer you to his blog post. That said, I will try to create a brief overview below: The NDF is generally the dominant portion of the Cook-Torrance BRDF, so if we are going to importance sample, then we should sample based on the NDF. Cook-Torrance doesn't specify a specific NDF to use; we are free to choose whichever one suits our fancy. That said, there are a few popular NDFs: GGX, Beckmann, and Blinn. Each NDF has its own formula, thus each must be sampled differently. I am only going to show the final sampling function for each. If you would like to see how the formula is derived, see the blog post. GGX is defined as: $$D_{GGX}(m) = \frac{\alpha^2}{\pi((\alpha^2-1) \cos^2(\theta) + 1)^2}$$ To sample the spherical coordinate angle $\theta$, we can use the formula: $$\theta = \arccos \left( \sqrt{\frac{1 - \xi_1}{\xi_1 (\alpha^2 - 1) + 1}} \right)$$ where $\xi_1$ is a uniform random variable. We assume that the NDF is isotropic, so we can sample $\phi$ uniformly: $$\phi = 2 \pi \xi_{2}$$ Beckmann is defined as: $$D_{Beckmann}(m) = \frac{1}{\pi \alpha^2\cos^4(\theta)} e^{-\frac{\tan^2(\theta)}{\alpha^2}}$$ Which can be sampled with: $$\theta = \arccos \left(\sqrt{\frac{1}{1 - \alpha^2 \ln(1 - \xi_1)}} \right) \\
\phi = 2 \pi \xi_2$$ Lastly, Blinn is defined as: $$D_{Blinn}(m) = \frac{\alpha + 2}{2 \pi} (\cos(\theta))^{\alpha}$$ Which can be sampled with: $$\theta = \arccos \left(\xi_{1}^{\frac{1}{\alpha + 1}} \right) \\
\phi = 2 \pi \xi_2$$ Putting it in Practice Let's look at a basic backwards path tracer: void RenderPixel(uint x, uint y, UniformSampler *sampler) {
Ray ray = m_scene->Camera.CalculateRayFromPixel(x, y, sampler);
float3 color(0.0f);
float3 throughput(1.0f);
// Bounce the ray around the scene
for (uint bounces = 0; bounces < 10; ++bounces) {
m_scene->Intersect(ray);
// The ray missed. Return the background color
if (ray.geomID == RTC_INVALID_GEOMETRY_ID) {
color += throughput * float3(0.846f, 0.933f, 0.949f);
break;
}
// We hit an object
// Fetch the material
Material *material = m_scene->GetMaterial(ray.geomID);
// The object might be emissive. If so, it will have a corresponding light
// Otherwise, GetLight will return nullptr
Light *light = m_scene->GetLight(ray.geomID);
// If we hit a light, add the emmisive light
if (light != nullptr) {
color += throughput * light->Le();
}
float3 normal = normalize(ray.Ng);
float3 wo = normalize(-ray.dir);
float3 surfacePos = ray.org + ray.dir * ray.tfar;
// Get the new ray direction
// Choose the direction based on the material
float3 wi = material->Sample(wo, normal, sampler);
float pdf = material->Pdf(wi, normal);
// Accumulate the brdf attenuation
throughput = throughput * material->Eval(wi, wo, normal) / pdf;
// Shoot a new ray
// Set the origin at the intersection point
ray.org = surfacePos;
// Reset the other ray properties
ray.dir = wi;
ray.tnear = 0.001f;
ray.tfar = embree::inf;
ray.geomID = RTC_INVALID_GEOMETRY_ID;
ray.primID = RTC_INVALID_GEOMETRY_ID;
ray.instID = RTC_INVALID_GEOMETRY_ID;
ray.mask = 0xFFFFFFFF;
ray.time = 0.0f;
}
m_scene->Camera.FrameBuffer.SplatPixel(x, y, color);
} I.e., we bounce around the scene, accumulating color and light attenuation as we go. At each bounce, we have to choose a new direction for the ray. As mentioned above, we could uniformly sample the hemisphere to generate the new ray. However, the code is smarter; it importance samples the new direction based on the BRDF. (Note: this is the input direction, because we are a backwards path tracer.) // Get the new ray direction
// Choose the direction based on the material
float3 wi = material->Sample(wo, normal, sampler);
float pdf = material->Pdf(wi, normal); Which could be implemented as: float3 LambertBRDF::Sample(float3 outputDirection, float3 normal, UniformSampler *sampler) {
float rand = sampler->NextFloat();
float r = std::sqrtf(rand);
float theta = sampler->NextFloat() * 2.0f * M_PI;
float x = r * std::cosf(theta);
float y = r * std::sinf(theta);
// Project z up to the unit hemisphere
float z = std::sqrtf(1.0f - x * x - y * y);
return normalize(TransformToWorld(x, y, z, normal));
}
float3a TransformToWorld(float x, float y, float z, float3a &normal) {
// Find an axis that is not parallel to normal
float3a majorAxis;
if (abs(normal.x) < 0.57735026919f /* 1 / sqrt(3) */) {
majorAxis = float3a(1, 0, 0);
} else if (abs(normal.y) < 0.57735026919f /* 1 / sqrt(3) */) {
majorAxis = float3a(0, 1, 0);
} else {
majorAxis = float3a(0, 0, 1);
}
// Use majorAxis to create a coordinate system relative to world space
float3a u = normalize(cross(normal, majorAxis));
float3a v = cross(normal, u);
float3a w = normal;
// Transform from local coordinates to world coordinates
return u * x +
v * y +
w * z;
}
float LambertBRDF::Pdf(float3 inputDirection, float3 normal) {
return dot(inputDirection, normal) * M_1_PI;
} After we sample the inputDirection ('wi' in the code), we use that to calculate the value of the BRDF. And then we divide by the pdf as per the Monte Carlo formula: // Accumulate the brdf attenuation
throughput = throughput * material->Eval(wi, wo, normal) / pdf; Where Eval() is just the BRDF function itself (Lambert, Blinn-Phong, Cook-Torrance, etc.): float3 LambertBRDF::Eval(float3 inputDirection, float3 outputDirection, float3 normal) const {
return m_albedo * M_1_PI * dot(inputDirection, normal);
}
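As an added sketch (not from the original answer), here is how the GGX sampling formulas above could be dropped into the same style of code as the Lambert example. GGXBRDF and its m_alpha roughness member are hypothetical names, and float3, UniformSampler, and TransformToWorld are assumed to be the same helpers used above:

// Illustrative sketch: importance-sample the GGX NDF using the theta/phi
// formulas given earlier. Returns a microfacet half-vector around the normal.
float3 GGXBRDF::SampleHalfVector(float3 normal, UniformSampler *sampler) {
    float xi1 = sampler->NextFloat();
    float xi2 = sampler->NextFloat();

    // theta = arccos(sqrt((1 - xi1) / (xi1 * (alpha^2 - 1) + 1)))
    float alpha2 = m_alpha * m_alpha;
    float cosTheta = std::sqrtf((1.0f - xi1) / (xi1 * (alpha2 - 1.0f) + 1.0f));
    float sinTheta = std::sqrtf(1.0f - cosTheta * cosTheta);

    // phi = 2 * pi * xi2 (the NDF is isotropic, so phi is uniform)
    float phi = 2.0f * M_PI * xi2;

    // Spherical to Cartesian, then rotate into the frame around the surface normal
    float x = sinTheta * std::cosf(phi);
    float y = sinTheta * std::sinf(phi);
    float z = cosTheta;
    return normalize(TransformToWorld(x, y, z, normal));
}

Note that this draws the microfacet half-vector h rather than the light direction itself; the outgoing sample is the reflection of wo about h, and the pdf you divide by must include the corresponding change-of-variables factor (the blog post linked above derives it).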
"source": [
"https://computergraphics.stackexchange.com/questions/4979",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/5256/"
]
} |
5,465 | In film school, in 3D modeling classes, I was told that when we model something for films we maintain a topology of 4-edged polygons. Any polygon with more or fewer than 4 edges/vertices is considered bad and should be avoided. Whereas if the same model is used in games, it is triangulated. Although I'm not majoring in 3D modeling, the question is still in my mind. Why are 3-edged polygons used in gaming? Do they render faster? Then why aren't they used in film renderings? | For 3D modeling, the usual reason to prefer quads is that subdivision surface algorithms work better with them: if your mesh is getting subdivided, triangles can cause problems in the curvature of the resulting surface. For an example, take a look at these two boxes: The left one is all quads; the right one has the same overall shape, but one corner is made of triangles instead of quads. Now see what happens when the boxes get subdivided: Different, yeah? Note the way the edge loops have changed from being roughly equidistant on the left box to a more complicated, pinched-and-stretched arrangement on the right. Now let's turn off the wireframe and see how it's getting lit. See the weird pinching in the highlight on the right box? That's caused by the messy subdivision. This particular one is a pretty benign case, but you can get way more messed-up-looking results with more complex meshes with higher subdivision levels (like the ones you'd usually use in film). All of this still applies when making game assets, if you're planning to subdivide them, but the key difference is that the subdivision happens ahead of time, while you're still in quad-land, and then the final subdivided result gets turned into triangles because that's what the graphics hardware speaks (because, as mentioned in the comments above, it makes the math easier). | {
"source": [
"https://computergraphics.stackexchange.com/questions/5465",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/6749/"
]
} |
5,724 | I'm new to shaders and know that you can color pixels with gl_FragColor, but sometimes there is this thing: vec2 uv = gl_FragCoord.xy / screenSize; // or resolution or something, depending on the implementation If gl_FragCoord is like pixel coordinates, what does uv get? Why is it often done in GLSL? If someone could even draw an example of which part of the screen the UV covers, it would be very helpful! | First, gl_FragCoord.xy are the screen-space coordinates of the current pixel, based on the viewport size. So if the viewport size is width=5, height=4, then each fragment contains its pixel-center coordinates, from (0.5, 0.5) up to (4.5, 3.5). Why are uvs needed? For example, I rendered geometry to a screen quad and then I need to apply some postprocessing to this quad in another rendering pass. To sample from that quad I need texture coordinates in the range [0, 1]. So to calculate them I perform the division gl_FragCoord.xy / viewPortSize.
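As an added CPU-side sketch (not part of the original answer), here is that mapping spelled out for the 5x4 viewport; gl_FragCoord.xy sits at pixel centers, hence the +0.5:

#include <cstdio>

int main() {
    const float width = 5.0f, height = 4.0f; // the 5x4 viewport from the answer
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 5; ++x) {
            // What gl_FragCoord.xy would hold for this fragment (pixel center)
            float fragX = x + 0.5f, fragY = y + 0.5f;
            // The usual uv calculation: normalize into [0, 1]
            float u = fragX / width, v = fragY / height;
            std::printf("frag (%.1f, %.1f) -> uv (%.2f, %.2f)\n", fragX, fragY, u, v);
        }
    }
}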
"source": [
"https://computergraphics.stackexchange.com/questions/5724",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/7464/"
]
} |
5,784 | Why can combinations of red, green, and blue make up all the visible colors? | Let's remind ourselves what light is. Radio waves, microwaves, X-rays and gamma rays are all electromagnetic radiation, and they only differ by their frequency. It just so happens that the human eye is able to detect electromagnetic radiation with wavelengths between ~400nm and ~800nm, which we perceive as light. The 400nm end is perceived as violet and the 800nm end is perceived as red, with the colors of the rainbow in between. A ray of light can be a mix of any of those frequencies, and when light interacts with matter, some frequencies are absorbed while others might not be: this is what we perceive as the colors of objects around us. Unlike the ear though, which is able to distinguish between a lot of sound frequencies (we can identify individual notes, voices and instruments when listening to a song), the eye is not able to distinguish every single frequency. It can generally only detect four ranges of frequencies (there are exceptions like daltonism or mutations). This happens in the retina, where there are several kinds of photo-receptors. A first kind, called "rods", detects most frequencies of the visible light, without being able to tell them apart. They are responsible for our perception of brightness. A second kind of photo-receptor, called "cones", exists in three specializations. They detect a narrower range of frequencies, and some of them are more sensitive to the frequencies around red, some to the frequencies around green, and the last ones to the frequencies around blue. Because they detect a range of frequencies, they cannot tell the difference between two frequencies within that range, and they cannot tell the difference between a monochromatic light and a mix of frequencies within that range either. The visual system only has the inputs from those three detectors and reconstructs a perception of color with them. For this reason, the eye cannot tell the difference between a white light made of all the frequencies of the visible light, and a simple mix of only red, green and blue lights. Thus, with only three colors, we can reconstruct most colors we can see. By the way, rods are a lot more sensitive than cones, and that's why we don't perceive colors at night.
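A toy numerical sketch of that reduction (my addition, not from the original answer; the band count and sensitivity numbers are made up purely for illustration):

#include <array>
#include <cstdio>

// Toy model: the eye collapses any spectrum to three cone responses, so two
// different spectra that produce the same three numbers look identical.
constexpr int kBands = 4; // coarse wavelength bands, purely illustrative

using Spectrum = std::array<float, kBands>;

// Made-up cone sensitivities per band (real curves are smooth and overlapping)
const Spectrum kRedCone   = {0.0f, 0.1f, 0.4f, 0.9f};
const Spectrum kGreenCone = {0.1f, 0.6f, 0.8f, 0.2f};
const Spectrum kBlueCone  = {0.9f, 0.5f, 0.1f, 0.0f};

float response(const Spectrum &light, const Spectrum &cone) {
    float r = 0.0f;
    for (int i = 0; i < kBands; ++i) r += light[i] * cone[i]; // integrate over bands
    return r;
}

int main() {
    Spectrum broadWhite = {1.0f, 1.0f, 1.0f, 1.0f}; // many frequencies at once
    std::printf("R=%.2f G=%.2f B=%.2f\n",
                response(broadWhite, kRedCone),
                response(broadWhite, kGreenCone),
                response(broadWhite, kBlueCone));
    // Any other spectrum producing the same three responses is indistinguishable,
    // which is exactly why an RGB display can stand in for a full spectrum.
}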
"source": [
"https://computergraphics.stackexchange.com/questions/5784",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/7550/"
]
} |
7,809 | I have played around with CPU assembly programming like Nasm, Tasm or Masm, but I'm really curious to know how GPUs work now.
However, I'm quite confused when I look on the internet. I've heard about CUDA and OpenCL, but they are not what I'm looking for.
I'd like to know how GPU instructions are represented in RAM... What are the Nasm and Masm for most GPUs? What is the x86 or Z80 of GPUs (what are the different families of GPU)? Do you know of a manufacturer's opcode reference manual?
I think I really need something to compare the two processing units to make it clear, because GPU assembly programming seems to be an even harder subject to learn about on the internet than CPU asm programming. I've also read that "NVIDIA never released details on the instructions actually understood by their hardware", but it seems pretty surprising to me. Full post there: https://stackoverflow.com/questions/4660974/how-to-create-or-manipulate-gpu-assembler?newreg=e31519279ce949f087df6322dbf2bf4d Thanks for your help! | You're tilting at windmills trying to learn "GPU assembly", and it's due to the differences between how CPUs and GPUs are made and sold. Each CPU has what's called an instruction set architecture, for example x86 or ARMv8. The instruction set is the interface between the user of the CPU (i.e. the programmer) and the chip. The chip designer publishes the details of the instruction set so that compiler vendors can write compilers to target that instruction set. Any CPU that uses that instruction set can run the same binaries. (Extensions like SSE make that slightly untrue, but they only add new instructions: they don't change the structure of the binary.) When the vendor creates a new processor family, it could have a completely different micro-architecture internally, but the same instruction set. GPUs are not like this at all. For best efficiency, the instruction set is usually closely tied to the micro-architecture of the GPU. That means each new family of GPUs has a new instruction set. GPUs can't run binaries made for different GPUs. For this reason, the instruction set usually isn't published: instead, your interface to the GPU is the driver published by the vendor for each graphics API (OpenGL, Vulkan, DirectX, &c.). This is why the graphics APIs have functions to take the source code of a shader and run it: the compiled shader only runs on the same model or family of GPU it was compiled for. The closest you get to GPU assembly language is SPIR-V. This is an industry-standard intermediate representation, which GPU vendors are starting to support. It's like a GPU equivalent of LLVM's intermediate representation. It allows you to do the parsing and source optimization parts of compilation up-front to get a SPIR-V file, and then the GPU driver only needs to compile that to the GPU's instruction set at load time.
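As an added illustration (not from the original answer), this is roughly what that last step looks like in Vulkan: you hand the driver a SPIR-V binary, and it finishes the compilation down to the GPU's own instruction set. Here device is an assumed valid VkDevice, spirvBytes an assumed std::vector<char> loaded from disk, and <vulkan/vulkan.h> is included:

// Create a shader module from a pre-compiled SPIR-V binary (sketch)
VkShaderModuleCreateInfo info = {};
info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
info.codeSize = spirvBytes.size();                              // size in bytes
info.pCode    = reinterpret_cast<const uint32_t *>(spirvBytes.data());

VkShaderModule module = VK_NULL_HANDLE;
// The driver translates the SPIR-V to the GPU's native ISA at load time
vkCreateShaderModule(device, &info, nullptr, &module);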
"source": [
"https://computergraphics.stackexchange.com/questions/7809",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/9028/"
]
} |
8,061 | I am learning about normal mapping. I understood that RGB values are converted into XYZ, but my question is: how is it converted, and why is the normal map blue and purple in color? | Only tangent-space normal maps are primarily blue. This is because the colour blue represents a normal of (0,0,1), which would be an unchanged normal when the triangle lies in the x-y plane, i.e. pointing perpendicular to the surface. The tangent, x, and bi-tangent, y (also referred to as bi-normal), are encoded in the red and green channels, and these combine to create a tangent-space normal for a point on the surface of the triangle. If a tangent-space normal map were to encode a colour only in red (1.0, 0.0, 0.0), this would generate a tangent-space normal parallel to the triangle surface. This is never seen because it would mean that the triangle would only ever be lit at 90 degrees from the surface and view vector, at which point you wouldn't be able to see the triangle anyway. World-space normal maps encode the unit normal over a sphere, so they can be primarily different colours once encoded from [-1, 1] to [0, 1] per channel. A comparison can be seen here: In practice normal maps are usually encoded in a 2-channel format such as BC5, which actually only stores the x and y, with the z being reconstructed as we know it's a unit vector. This allows you to maintain higher precision with more bits without increasing the file size.
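As an added sketch (not from the original answer), the decode described above looks roughly like this; the [0, 1] channel values are assumed to come from sampling the texture:

#include <cmath>

struct Vec3 { float x, y, z; };

// Full RGB tangent-space normal map: expand each [0, 1] channel to [-1, 1]
Vec3 DecodeNormal(float r, float g, float b) {
    return { r * 2.0f - 1.0f, g * 2.0f - 1.0f, b * 2.0f - 1.0f };
}

// BC5 stores only x and y; reconstruct z from the unit-length constraint
Vec3 DecodeNormalBC5(float r, float g) {
    float x = r * 2.0f - 1.0f;
    float y = g * 2.0f - 1.0f;
    float z = std::sqrt(std::fmax(0.0f, 1.0f - x * x - y * y));
    return { x, y, z };
}

An unperturbed texel, (0.5, 0.5, 1.0) in RGB, decodes to the (0, 0, 1) normal mentioned above, which is exactly the flat-blue colour that dominates tangent-space maps.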
"source": [
"https://computergraphics.stackexchange.com/questions/8061",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/9298/"
]
} |
78 | We are now in the third day of private beta with a couple of questions. That is great. However, most of the questions here so far are about CS theory, which already has a full-grown site. The overlap is not a problem per se and is expected. However, if we continue this way, I fear that we might not survive the beta phase due to being a duplicate community, ending up absorbed by cstheory with many questions migrated there, even if cstheory widens its scope a bit in the process. So what should we do to show that we are significantly distinct from cstheory and develop a community personality, instead of just being a smaller duplicate of cstheory with just a small handful of questions that would be off-topic there? | The core question here should be, why do we even have separate CS and CS Theory sites? And the most convenient answers to this can be found by reading this thread on the CSTheory meta. If you have time, please read through it before continuing here. If you're still stuck in line at the DMV after that, read this. But if you have time to read nothing else, please at least read this answer from Jukka Suomela: Not everyone agrees with the premise that StackOverflow is exemplary and that all other sites should follow its model. I find its huge volume of traffic exhausting and daunting. It is too fast-paced for my taste. Questions are asked, very quickly answered, and then forgotten. It might work fairly well for technical programming-related questions, but it is not necessarily a model that I would like to try with theoretical research problems. I think that reasonably low volume is an important feature of this site. While perhaps not stated in such plain language elsewhere, this seems to be the underlying theme of most answers there, and it represents a division of philosophy that arose in the SE 1.0 era (beginning with Stack Overflow) and has now been replicated and expanded on the SE 2.0 sites. There are a number of ways to describe this division, but for the purpose of this discussion I'm going to use a question... Is a lack of existing knowledge of the field an insurmountable barrier to entry? Stack Overflow espouses an open-arms approach: show up, ask your question, and as long as it's about programming - an extremely broad field - you'll get an answer. Doesn't matter if it's answered on page one, chapter one of Learn 2 Program in 10 Hours - they won't turn you away. Theoretical Computer Science takes the opposite approach. Your question must be related to theoretical computer science - an already very narrow field. But it must also be a research-level question. From the FAQ: Although there is no black-and-white distinction between research-level questions and non-research-level questions, questions are considered to be "research-level" roughly when they can be discussed between two professors or between two graduate students working on Ph.D.'s, but not usually between a professor and a typical undergraduate student. It does not include questions at the level of difficulty of typical undergraduate course/textbook homework/exercise. The big tent In practice, there are questions that are considered "too basic" for Stack Overflow. But this inclusionist philosophy has had a massive influence on its reach and scope: by demanding little-to-no background from askers in any topic, SO has been able to serve a staggeringly broad base of users from different backgrounds, many of them experts in their own right but in fields that have only a tangential connection to programming.
But even more importantly, this broad scope encourages the collaborative construction of a library of knowledge useful to many who may never actively participate on the site. This was not accidental. And the success of this philosophy in achieving this goal is reflected in the numbers: 96% of visitors to the site do so because their question shows up in a Google (or other general-purpose search engine) search result. Network-wide, that number is about 87%. The vast, vast majority of people learning from the answers on these sites will never need to even sign in. Oh... And on CSTheory? That number hovers between 40% and 50%. The mailing list This brings me back to Jukka's answer (quoted in part above). Even though the Stack Exchange software seems to be working fairly well for that community, this is mostly accidental - it was never designed for small, low-traffic, mostly self-contained groups of people. You can actually license the system for use on private, internal sites, but very, very few groups ever do this: not many internal organizations reach the scale necessary for it to actually work. Indeed, CSTheory is more akin to a large mailing list or message board than it is to the sort of Q&A repository Stack Exchange was designed for. That it works at all, and has managed to establish and sustain a core group of users, is actually quite fascinating. I feel strongly that there are lessons we can learn from it and apply to other topics within our network. That said, I must strongly caution you against attempting to replicate that philosophy here. We're not in the mailing list business Please let me be frank with you: this network - which is to say, the people who comprise it, not the organization funding it - cannot support a dozen sites like CSTheory, now or in the near future. And I have no desire to shut this site down, followed one after another by each of the dozen or so related proposals for niche sites. But if each one insists on trying to follow this mailing-list pattern, that is exactly what will happen. That's not a threat - it's a prediction. So to finally answer the question, this is what you can do to make your community different from that on CSTheory: Welcome beginners Computer science is a fairly large field, and folks approach it from many directions. Some make it their field of study; others make it their livelihood. Many will encounter it as part of their work or study in some other field. Don't turn them away. Welcome closely-related topics Don't worry too much if a particular question seems like it's focused more on mathematics, or software engineering, or statistics. If it can be asked and answered from a CS perspective, then edit to make that clear... and then answer it. I would much rather see this site encompass topics like artificial intelligence or even computational linguistics than try to spin up separate sites. Be very careful when defining your scope to be as inclusive as possible without completely abandoning CS as the focus. Strive for accessible language in questions and answers Every question need not devolve into a beginner's tutorial on the basic concepts. But questions and answers will be far more useful as a resource for others when they're written (or edited) with that in mind. Avoid ambiguity, embrace detail, and welcome questions from those unfamiliar with your particular area of interest as an opportunity.
Be patient with students I have a sneaking suspicion that the primary audience for this site - initially at least - will be students, from many different backgrounds and at many different points in their education. This is a wonderful opportunity, but it can also be a burden: students are often poor at asking questions, may lack sufficient background to even know quite what they're asking, and can be tempted to misuse the site as a resource for cheating rather than learning. I can't promise a happy outcome, even if you follow these guidelines. As some of you may know, I was quite reluctant to see this site launch in the first place, and I still harbor grave doubts as to whether it can be made to work. But after much discussion, in public and internally, we decided to give it a chance. All I can ask is that you do the same... | {
"source": [
"https://cs.meta.stackexchange.com/questions/78",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/95/"
]
} |
1,151 | I understand that all the forums serve different purposes, and they have their own moderators, rules and policies. But CS seems completely different to me, with respect to all the other forums I have used. The most significant difference that surprised me is that one will never get a solution. I totally understand that one must do self-study and research before posting a question here, but after doing everything one could (in the limited timeframe), he/she will not get a positive reply here. I guess most of the experts in the forum have pretty high standards. They will never believe that you have done enough research or self-study until you have solved the problem, in which case you will not need help from a forum or someplace else. Whenever I have posted a homework-related question, where I actually have given it some thought and done my research but still couldn't manage to solve the problem, I will get a reply that I should think more about it. But one has to keep in mind that students take plenty of courses each semester, and nearly all of them will include exercises (which is always helpful to learn the subject more), which amounts to plenty of work. The homeworks will usually have short deadlines (or deadlines that are okay, if you only take one course and fully focus on the subject). So, it is not possible to devote your full time to just one course and spend all your time on one problem set, because if you do, you will sacrifice the other subjects. Therefore, each student tries to keep a balance. Plus, not all subjects are equally important to everyone. Everyone thinks about the area where he/she wants to continue or specialize, and spends more time on those topics. Anyway, what I wanted to say is that whenever I have a math or programming question and I post it on MathExchange or StackOverflow, I will get an immediate answer, either a solution or a "really helpful" hint for the problem. Then, I can study the problem, see what the person who answered has done where I was stuck, ask him/her questions regarding the solution to fully grasp it, and in the end better understand the subject, because each problem teaches you something new or gives a different view of that area. I guess this kind of policy and behavior in those forums is what makes them very active and helpful. I understand that it is normal for SO and ME to have plenty of users; after all, they serve a larger base of people than CS. But when we consider that CS is perhaps the most important area of study at the moment, and more and more people enroll to study it, one would expect that the CS forum would be at the level of SO or ME. But seeing the unhelpful replies, I'm not surprised that it is not at the same level as SO or ME. | I agree with David Richerby. Math.se is the anomalous site. It is inundated with boring homework questions that keep repeating in an annual cycle. I personally ran away from the site with disgust, and I wouldn't want the same thing to happen here. I don't mind giving help, or sometimes complete solutions, for interesting and non-routine exercises. For me they are nice puzzles to sharpen my technical abilities. What I dislike are routine, basic exercises, which I usually only give hints to. This is not merely to discourage such questions, but also to encourage the posters to solve such exercises on their own. These routine, basic exercises are often not much more than a play with definitions.
They are designed to help you learn and internalize these definitions. If a student is unable to solve them, this is usually due to lack of "mathematical maturity". The only way to develop mathematical maturity is to solve such exercises enough times until your skill increases. Another issue that bothers me is fairness. It is common for students to ask for help from their peers, teaching assistants or professors. The latter will help the student with hints, but hopefully not complete solutions, while asking peers might be forbidden. The internet side channel is thus a way of cheating the system. Many people on math.se would disagree with me on this point, which is why that community behaves differently. There is a hint of entitlement in your question. We are not here to help. We are here to do as we please, forming our community standards through our actions. There is no obligation to help every student in the way they see fit. Our only obligation is to ourselves. There are disagreements among the major users on this site, a plurality of opinions, so even if one person is unwilling to help you, another might. But if all of us as a community feel helping you in this particular way is against our better judgement, your only option is to accept it, and try to change it, for example by asking this question. | {
"source": [
"https://cs.meta.stackexchange.com/questions/1151",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/-1/"
]
} |
1,169 | As you know, we are graduating. The last missing puzzle piece is our own design, which will be created by Stack Exchange employees at some uncertain point in the future. They are known for listening to community input and feedback. So while we probably should not try to do their work for them, I think it may be worthwhile to give them some thoughts to play around with. One thing I have been agonising over is this: how to represent computer science visually? At all and, more importantly in this context, in a small square-ish form factor? Can we avoid clichés? So this is a request to brainstorm, break down CS as you see it to its essentials and boil it down to abstract, visual cues. Go as far as you can: while mockups are great, I guess that the designers can do a lot (maybe more) with ideas and concepts. A handful of keywords may be enough; see here for the thoughts behind the logo of crypto.SE. Paweł, a designer working for Stack Exchange, is seeking ideas, especially about a logo. Related question: Advertising Computer Science Stack Exchange on other SE sites | Rooted trees are pretty ubiquitous in all kinds of computer science, both theoretical and applied. You can find plenty of applications of rooted trees e.g. in the context of algorithms and data structures, automata theory, computational complexity theory, computational geometry, programming language theory, formal methods, artificial intelligence, computer architecture, computer graphics, parallel and distributed computing, information storage and retrieval, and software engineering. Five nodes are enough to draw something that is easily recognisable as a rooted binary tree. Works well in small sizes, too. | {
"source": [
"https://cs.meta.stackexchange.com/questions/1169",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/98/"
]
} |
1,650 | I am stepping down as a moderator, and you deserve to know why. I cannot condone actions that the company has recently taken (behind closed doors, although I expect more will become public soon). Social life means being confronted with different points of view. This is especially true in an international setting such as Stack Exchange, where you get to encounter people from different cultures. When interacting with others, you need to draw lines — for example, racism is not acceptable, full stop — and within those bounds, you need to open up to diversity. Sometimes that means listening to multiple points of view, sometimes agreeing to disagree, and sometimes compromising. I have witnessed a disagreement between moderators where both sides made some good points. Both sides deserved and requested respect. One side was aware that their behavior could hurt even though no malice was intended, and tried to go out of their way in order not to be hurtful. The other side demanded to have things their way, and did not care who they were hurting in the process. In this particular dispute, there was clearly a victim and aggressors. The victim has now written up her side of the story. Stack Exchange intervened, did not try to calm spirits, came down firmly on the uncompromising side, and fired the victim in a very hurtful manner. This is not an environment I feel safe in, and certainly not an environment I can or will help foster. I have been a participant on Stack Exchange for 9 years and a volunteer moderator across different sites for more than 8. Stepping down is a big thing for me. I am very disappointed to go this way. But I simply cannot continue. I still value building a shared library of knowledge. I see a question and answer site as a useful complement to an encyclopedia, whose role Wikipedia fulfills. However, the way it is run now makes me doubt that Stack Exchange is a good place to build this library, so I will be reevaluating my participation. | Thank you for all of your enormous contributions to the site over the years, Gilles! I admire your acts of service and your spirit of giving to the world anonymously, and I will miss having you on the moderation team. Best wishes in all your future endeavours. | {
"source": [
"https://cs.meta.stackexchange.com/questions/1650",
"https://cs.meta.stackexchange.com",
"https://cs.meta.stackexchange.com/users/39/"
]
} |