Dataset schema: source_id (int64, 1–4.64M), question (string, lengths 0–28.4k), response (string, lengths 0–28.8k), metadata (dict).
50,684
I am new to chemistry and I find it fascinating. I am trying to learn about chemical reactions and I was wondering if there was an easy way to quickly tell if any combination of chemical substances would produce a reaction and what product(s) if any might be formed. For example, if I pick any two random substances $\ce{A}$ and $\ce{B}$, can I determine if a reaction will occur and predict the products? $$\ce{A + B -> \ ?}$$ More specifically, let's say I just learned that chlorine bleach (sodium hypochlorite) can be made by a reaction of sodium hydroxide and chlorine, with sodium chloride and water as byproducts: $$\ce{2NaOH(aq) + Cl2(g) -> NaOCl(aq) + NaCl(aq) + H2O(l)}$$ Is there a way that I could have predicted this reaction (and any other) before I learned about it? I do not want to memorize the outcome of every combination just so that I can answer questions about chemical reactions. I am hoping there is a short list of simple rules that govern all chemical reactions that I can commit to memory and then apply to any combination of substances. I might also like to be able to develop a simple computer program built around an algorithm for these reactivity rules that can sample databases of known substances and predict new reactions.
Can I predict the products of any chemical reaction? In theory, yes! Every substance has characteristic reactivity behavior. Likewise pairs and sets of substances have characteristic behavior. For example, the following combinations of substances only have one likely outcome each: $$ \ce{HCl + NaOH -> NaCl + H2O} \\[2ex] \ce{CH3CH2CH2OH->[$1.$ (COCl)2, (CH3)2SO][$2.$ Et3N] CH3CH2CHO} $$ However, it is not a problem suited to brute force or exhaustive approaches. There are millions or perhaps billions of known or possible substances. Let's take the lower estimate of 1 million substances. There are $999\,999\,000\,000$ possible pairwise combinations. Any brute force method (in other words a database that has an answer for all possible combinations) would be large and potentially resource prohibitive. Likewise you would not want to memorize the nearly 1 trillion combinations. If more substances are given, the combination space gets bigger. In the second example reaction above, there are four substances combined: $\ce{CH3CH2CH2OH}$, $\ce{(COCl)2}$, $\ce{(CH3)2SO}$, and $\ce{Et3N}$. Pulling four substances at random from the substance space generates a reaction space on the order of $1\times 10^{24}$ possible combinations. And that does not factor in order of addition. In the second reaction above, there is an implied order of addition: $\ce{CH3CH2CH2OH}$, $\ce{(COCl)2}$, $\ce{(CH3)2SO}$, $\ce{Et3N}$. However, there are $4!=24$ different orders of addition for four substances, some of which might not generate the same result. Our reaction space is up to $24\times 10^{24}$, a bewildering number of combinations. And this space does not include other variables, like time, temperature, irradiation, agitation, concentration, pressure, control of environment, etc. If each reaction in the space could somehow be stored for as little as 100 kB of memory, then the whole space of combinations up to 4 substances would require $2.4 \times 10^{30}$ bytes of data, or $2.4\times 10^{9}$ ZB (zettabytes), or $2.4\times 10^{6}$ trillion terabytes. The total digital data generated by the human species was estimated recently (Nov. 2015) to be 4.4 ZB. We would need roughly $5.5\times 10^{8}$ times more data in the world to hold such a database. And that does not even count the program written to search it or the humans needed to populate it, the bandwidth required to access it, or the time investment of any of these steps. In practice, it can be manageable! Even though the reaction space is bewilderingly huge, chemistry is an orderly, predictable business. Folks in the natural product total synthesis world do not resort to random combinations and alchemical mumbo jumbo. They can predict with some certainty what type of reactions do what to which substances and then act on that prediction. When we learn chemistry, we are taught to recognize if a molecule belongs to a certain class with characteristic behavior. In the first example above, we can identify $\ce{HCl}$ as an acid and $\ce{NaOH}$ as a base, and then predict an outcome that is common to all acid-base reactions. In the second example above, we are taught to recognize $\ce{CH3CH2CH2OH}$ as a primary alcohol and the reagents given as an oxidant. The outcome is an aldehyde. These examples are simple ones in which the molecules easily fit into one class predominantly. More complex molecules may belong to many categories. Organic chemistry calls these categories “functional groups”.
The ability to predict synthetic outcomes then begins and ends with identifying functional groups within a compound's structure. For example, even though the following compound has a more complex structure, it contains a primary alcohol, which will be oxidized to an aldehyde using the same reagents presented above. We can also be reasonably confident that no unpleasant side reactions will occur. If the reagents in the previous reaction had been $\ce{LiAlH4}$ followed by $\ce{H3O+}$, then more than one outcome is possible since more than one functional group in the starting compound will react. Controlling the reaction to give just one of the possible outcomes can be done, but it requires further careful thought. There are rules, but they are not few in number. There are too many classes of compounds to list here. Likewise, even one class, like primary alcohols (a hydroxyl group at the end of a hydrocarbon chain), has too many characteristic reactions to list here. If there are 30 classes of compounds (an underestimate) and 30 types of reactions (an underestimate), then there are 900 reaction types (an underestimate). The number of viable reaction types is more manageable than the total reaction space, but would still be difficult to commit to memory quickly. And new reaction types are being discovered all the time. Folks who learn how to analyze combinations of compounds spend years taking courses and reading books and research articles to accumulate the knowledge and wisdom necessary. It can be done. Computer programs can be (and have been) designed to do the same analysis, but they were designed by people who learned all of the characteristic combinations. There is no shortcut.
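Since the question mentions wanting to build a program around reactivity rules, here is a small, illustrative Python sketch of the back-of-the-envelope estimates made above. It is only a toy calculation: the one-million-substance count, the crude treatment of order of addition, and the 100 kB-per-reaction storage figure are simply the assumptions already used in this answer, not measured quantities.

```python
n = 1_000_000                       # lower estimate of the number of known substances (as above)

# ordered pairs of two different substances (A + B and B + A counted separately)
pairs = n * (n - 1)
print(f"pairwise combinations : {pairs:,}")            # 999,999,000,000

# rough upper bound used above: ~n^4 four-substance pulls, times 4! = 24 orders of addition
four_component = n**4 * 24
print(f"4-substance space     : {four_component:.1e}") # ~2.4e25

# naive storage estimate at 100 kB per stored reaction outcome
bytes_needed = four_component * 100_000
print(f"storage needed        : {bytes_needed:.1e} B = {bytes_needed / 1e21:.1e} ZB")
```

Running it reproduces the orders of magnitude quoted above and makes the point numerically: a brute-force database is hopeless, which is why chemists reason with functional-group rules instead.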
{ "source": [ "https://chemistry.stackexchange.com/questions/50684", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/275/" ] }
50,704
The pH of pure liquid water depends on temperature. It is about pH = 7.0 at room temperature, pH = 6.1 at 100 °C, and pH = 7.5 at 0 °C. What happens to the pH (or to the ion product) of pure water when it freezes? I assume that the proton transfer reactions $$\ce{2H2O <=> H3O+ + OH-}$$ $$\ce{H3O+ + H2O <=> H2O + H3O+}$$ $$\ce{H2O + OH- <=> OH- + H2O}$$ are too fast, so that any present $\ce{H3O+}$ and $\ce{OH-}$ cannot be easily trapped in the solid ice crystal when it grows. Does that mean that pure ice crystals are free of $\ce{H3O+}$ and $\ce{OH-}$ ions?
According to Martin Chaplin's Water Dissociation and pH: In ice, where the local hydrogen bonding rarely breaks to separate the constantly forming and re-associating ions, the dissociation constant is much lower (for example at $-4~\mathrm{^\circ C}$, $K_\mathrm{w} = 2 \times 10^{-20}~\mathrm{mol^2~L^{-2}}$). So $[\ce{H+}] = 1.4 \times 10^{-10}~\mathrm{mol\ L^{-1}} \Longrightarrow \mathrm{p[\ce{H+}]} = 9.9$. For more information see Self-Dissociation and Protonic Charge Transport in Water and Ice, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Vol. 247, No. 1251 (Oct. 21, 1958), pp. 505–533. This is a review article by Nobel Prize winner Manfred Eigen, after whom hydrated $\ce{H3O+}$ is sometimes referred to as the Eigen ion.
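For anyone who wants to verify the arithmetic, the step from the quoted ion product to the p[H+] of ice is a one-liner; a minimal Python check (using only the $K_\mathrm{w}$ value quoted above) is:

```python
import math

Kw_ice = 2e-20                          # ion product of ice at -4 °C quoted above, mol^2 L^-2
h = math.sqrt(Kw_ice)                   # [H+] = [OH-] = sqrt(Kw) for pure ice
print(f"[H+]  ~ {h:.1e} mol/L")         # ~1.4e-10
print(f"p[H+] ~ {-math.log10(h):.2f}")  # ~9.85, i.e. the p[H+] of about 9.9 quoted above
```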
{ "source": [ "https://chemistry.stackexchange.com/questions/50704", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/29770/" ] }
50,796
From Wikipedia's article on sodium: When burned in dry air, it forms primarily sodium peroxide with some sodium oxide. We know that sodium has a strong reducing capacity, so why does it produce a compound in which the oxygen atom is not reduced to the fullest possible extent? After posting the question, I read up some more and found this, on Chemguide's page describing the oxidation properties of Group 1 elements: So why do any of the metals form the more complicated oxides? It is a matter of energetics. In the presence of sufficient oxygen, they produce the compound whose formation gives out most energy. That gives the most stable compound. The amount of heat evolved per mole of rubidium in forming its various oxides is: $$\begin{array}{|c|c|} \hline & \text{enthalpy change (kJ / mol of Rb)} \\ \hline \ce{Rb2O} & -169.5 \\ \ce{Rb2O2} & -236 \\ \ce{RbO2} & -278.7 \\ \hline \end{array}$$ This is hardly an explanation, because it just says in a circular way that it's energetically profitable to form such compounds. Why is this energetically profitable? Can part of the explanation be "because in Group 1, starting from $\ce{Na}$, ion charge densities are low enough to allow peroxides and superoxides to exist"? I wonder if the following behaviour is related to this: lithium nitrate decomposes to lithium oxide (again the other participant has the oxidation -2), while the rest of the group's elements have their nitrates decompose to a nitrite (again the other participant's oxidation is -1): From Chemguide : $$\ce{4 LiNO3 (s) -> 2 Li2O (s) + 4 NO2 (g) + O2 (g)}$$ $$\ce{2 MNO3 (s) -> 2 MNO2 (s) + O2 (g)}$$
Since I will deal with all of the alkali metals in this answer, I think the question should also be broadened. There is no point in covering one single metal (sodium) without touching the others since it is the trend going down the group that we are interested in. All thermodynamic data is taken from Prof. M. Hayward's lecture notes at Oxford. So, firstly, some data. There is an increase in ionic radius going down the group, which should not be surprising: $$\begin{array}{cc} \hline \ce{M} & \text{Ionic radius of }\ce{M+}\text{ / pm} \\ \hline \ce{Li} & 76 \\ \ce{Na} & 102 \\ \ce{K} & 138 \\ \ce{Rb} & 152 \\ \ce{Cs} & 167 \\ \hline \end{array}$$ This leads to a decrease in the magnitude of the lattice enthalpies of the Group I superoxides, peroxides, and oxides, going down the group: $$\begin{array}{cccc} \hline \ce{M} & \Delta H_\mathrm{L}(\ce{MO2})\mathrm{~/~kJ~mol^{-1}} & \Delta H_\mathrm{L}(\ce{M2O2})\mathrm{~/~kJ~mol^{-1}} & \Delta H_\mathrm{L}(\ce{M2O})\mathrm{~/~kJ~mol^{-1}} \\ \hline \ce{Li} & -960 & -2748 & -3295 \\ \ce{Na} & -860 & -2475 & -2909 \\ \ce{K} & -752 & -2175 & -2503 \\ \ce{Rb} & -717 & -2077 & -2375 \\ \ce{Cs} & -683 & -1981 & -2250 \\ \hline \end{array}$$ As for why there is an increase in magnitude going across the table, we have to look at the factors controlling the magnitude of the lattice enthalpy: $$\Delta H_\mathrm{L} \propto \frac{\nu z_+ z_-}{r_+ + r_-} \tag{1}$$ where $\nu$ is the number of ions in one formula unit, $z_+$ and $z_-$ are the charge numbers on the cation and anion, and $r_+$ and $r_-$ are the ionic radii: $$\begin{array}{cccccc} \hline \text{Formula unit} & \nu & z_+ & z_- & \text{Numerator in eq. (1)} & r_-\text{ / pm} \\ \hline \ce{MO2} & 2 & 1 & 1 & 2 & 149 \\ \ce{M2O2} & 3 & 1 & 2 & 6 & 159 \\ \ce{M2O} & 3 & 1 & 2 & 6 & 120 \\ \hline \end{array}$$ The lattice energies of the peroxides and oxides are roughly 3 times those of the corresponding superoxides, because of the larger numerator. The lattice energies of the oxides have a slightly larger magnitude than those of the corresponding peroxides, because of the smaller anionic radius. Just looking at the lattice enthalpies, we might think that all metal cations would form the oxides. However, this approach is flawed because it does not take into consideration the energy cost of forming the three different anions from molecular dioxygen. Recall that the lattice enthalpy is defined as $\Delta H$ for the reaction $$\ce{M+ (g) + X- (g) -> MX (s)}$$ However, when we burn a metal in oxygen we are starting from $\ce{M}$ and $\ce{O2}$. So, we have to figure out the energy needed to get from $\ce{M}$ to $\ce{M+}$, and from $\ce{O2}$ to the relevant anion ($\ce{O2-}$, $\ce{O2^2-}$, or $\ce{O^2-}$). The analysis here is complicated by the fact that the charges on the anions are not the same. Therefore, the reaction $$\ce{M(s) + O2 (g) -> MO2(s)}$$ cannot be directly compared with $$\ce{2M(s) + O2(g) -> M2O2(s)}$$ In order to "standardise" the equations, we will consider the reactions per mole of metal. One can loosely think of the combustion reaction as "releasing" some kind of energy within the metal; the most favourable reaction will be that which "releases" the most energy per mole of metal. Therefore, the three reactions we are considering are: $$\begin{align} \ce{M (s) + O2 (g) &-> MO2 (s)} & \Delta H_1 \\ \ce{M (s) + 1/2 O2 (g) &-> 1/2 M2O2 (s)} & \Delta H_2 \\ \ce{M (s) + 1/4 O2 (g) &-> 1/2 M2O (s)} & \Delta H_3 \\ \end{align}$$ Now, we construct Hess cycles for all three reactions.
Superoxides $$\require{AMScd}\begin{CD} \color{blue}{\ce{M(s) + O2(g)}} @>{\Large\color{blue}{\Delta H_1}}>> \color{blue}{\ce{MO2(s)}} \\ @V{\Large\Delta H_\mathrm{f}(\ce{M+})}VV @AA{\Large\Delta H_\mathrm{L}(\ce{MO2})}A \\ \ce{M+(g) + e- + O2(g)} @>>{\Large\Delta H_\mathrm{f}(\ce{O2^-})}> \ce{M+(g) + O2^-(g)} \end{CD}$$ Peroxides $$\require{AMScd}\begin{CD} \color{blue}{\ce{M(s) + 1/2O2(g)}} @>{\Large\color{blue}{\Delta H_2}}>> \color{blue}{\ce{1/2M2O2(s)}} \\ @V{\Large\Delta H_\mathrm{f}(\ce{M+})}VV @AA{\Large\ce{1/2}\Delta H_\mathrm{L}(\ce{M2O2})}A \\ \ce{M+(g) + e- + 1/2O2(g)} @>>{\Large\ce{1/2}\Delta H_\mathrm{f}(\ce{O2^2-})}> \ce{M+(g) + 1/2O2^2-(g)} \end{CD}$$ Oxides $$\require{AMScd}\begin{CD} \color{blue}{\ce{M(s) + 1/4O2(g)}} @>{\Large\color{blue}{\Delta H_3}}>> \color{blue}{\ce{1/2M2O(s)}} \\ @V{\Large\Delta H_\mathrm{f}(\ce{M+})}VV @AA{\Large\ce{1/2}\Delta H_\mathrm{L}(\ce{M2O})}A \\ \ce{M+(g) + e- + 1/4O2(g)} @>>{\Large\ce{1/2}\Delta H_\mathrm{f}(\ce{O^2-})}> \ce{M+(g) + 1/2O^2-(g)} \end{CD}$$ Now we need more data. $\Delta H_\mathrm{f}(\ce{M+})$ is simply the sum of the atomisation energy and first ionisation energy: $$\begin{array}{cc} \hline \ce{M} & \Delta H_\mathrm{f}(\ce{M+})\text{ / }\mathrm{kJ~mol^{-1}} \\ \hline \ce{Li} & 679 \\ \ce{Na} & 603 \\ \ce{K} & 508 \\ \ce{Rb} & 484 \\ \ce{Cs} & 472 \\ \hline \end{array}$$ and the enthalpies of formation of the anions are $$\begin{array}{cc} \hline \ce{X} & \Delta H_\mathrm{f}(\ce{X})\text{ / }\mathrm{kJ~mol^{-1}} \\ \hline \ce{O2-} & -105 \\ \ce{O2^2-} & +520 \\ \ce{O^2-} & +1020 \\ \hline \end{array}$$ Why is the enthalpy of formation of $\ce{O^2-}$ so large? The answer is that, to get from $\ce{1/2 O2}$ to $\ce{O^2-}$, you need to first break the $\ce{O=O}$ bond, then add two electrons to the oxygen. Furthermore, the second electron affinity is often an unfavourable process . For the other two anions, you don't need to break the $\ce{O=O}$ bond. So now, we can see a trend at work already. Going from the superoxides to peroxides to oxides, the more negative lattice enthalpy favours the formation of the oxide. However, the increasing heat of formation of the anion favours the formation of the superoxide. When do each of these two factors win out? Well, when lattice enthalpies are comparatively large, we would expect the lattice enthalpy factor to outweigh the heat of formation of the anion. Lattice enthalpies are large precisely when the cation is small, and therefore lithium forms the oxide when heated in oxygen. However, with caesium, lattice enthalpies are smaller, less significant, and the heat of formation of the anion wins out; caesium therefore forms the superoxide. The trend is of course not black and white. Going down the group from lithium to caesium, we might guess that perhaps there are one or two elements that form the intermediate peroxide. That element is sodium. You could say that the larger lattice energies of sodium salts sufficiently compensate for the formation of the peroxide ion, but aren't enough to compensate for the formation of the oxide ion. I leave you with the last bunch of numbers, which tabulate the values of $\Delta H_1$ through $\Delta H_3$ for all the elements (all values in $\mathrm{kJ~mol^{-1}}$). You can actually calculate these yourself by plugging the data above into the Hess cycles. 
It seems that the data is a little different from that given in the Chemguide screenshot you have, but the conclusion is the same, so I'll ignore that: $$\begin{array}{cccc} \hline \ce{M} & \Delta H_1\text{ (superoxide)} & \Delta H_2\text{ (peroxide)} & \Delta H_3\text{ (oxide)} \\ \hline \ce{Li} & -386 & -435 & \mathbf{-459} \\ \ce{Na} & -362 & \mathbf{-375} & -342 \\ \ce{K} & \mathbf{-349} & -320 & -234 \\ \ce{Rb} & \mathbf{-338} & -295 & -194 \\ \ce{Cs} & \mathbf{-316} & -259 & -143 \\ \hline \end{array}$$ As described earlier, the salt with the most negative enthalpy of formation will be preferentially formed. These are bolded in the table.
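As a sanity check, here is a minimal Python sketch (my own, not from the lecture notes) that plugs the numbers tabulated in this answer into the three Hess cycles and reproduces the $\Delta H_1$–$\Delta H_3$ table, including which product is preferred for each metal:

```python
# All enthalpies in kJ/mol, taken from the tables in this answer.
dHf_M = {"Li": 679, "Na": 603, "K": 508, "Rb": 484, "Cs": 472}    # M(s) -> M+(g) + e-
dHf_anion = {"superoxide": -105, "peroxide": 520, "oxide": 1020}  # O2(g) -> anion, per mole of anion
dHL = {  # lattice enthalpies of MO2, M2O2, M2O
    "Li": (-960, -2748, -3295),
    "Na": (-860, -2475, -2909),
    "K":  (-752, -2175, -2503),
    "Rb": (-717, -2077, -2375),
    "Cs": (-683, -1981, -2250),
}

for M, (L_sup, L_per, L_ox) in dHL.items():
    dH1 = dHf_M[M] + dHf_anion["superoxide"] + L_sup            # M + O2     -> MO2
    dH2 = dHf_M[M] + 0.5 * dHf_anion["peroxide"] + 0.5 * L_per  # M + 1/2 O2 -> 1/2 M2O2
    dH3 = dHf_M[M] + 0.5 * dHf_anion["oxide"] + 0.5 * L_ox      # M + 1/4 O2 -> 1/2 M2O
    preferred = min((("superoxide", dH1), ("peroxide", dH2), ("oxide", dH3)), key=lambda t: t[1])[0]
    print(f"{M:2s}: dH1 = {dH1:7.1f}, dH2 = {dH2:7.1f}, dH3 = {dH3:7.1f}  ->  {preferred}")
```

The printed values round to the bolded entries above (oxide for Li, peroxide for Na, superoxide for K, Rb and Cs).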
{ "source": [ "https://chemistry.stackexchange.com/questions/50796", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/7213/" ] }
50,814
Semi-permanent hair dye washes off after several months. What mechanism(s) keep the dye in the hair? (I assume it's not primary chemical bonds, otherwise it would be permanent)
{ "source": [ "https://chemistry.stackexchange.com/questions/50814", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/10104/" ] }
50,906
I've read that the oxygen atom in water is $\mathrm{sp^2}$ hybridized, such that one of the oxygen lone pairs should be in an $\mathrm{sp^2}$ orbital and the other should be in a pure p atomic orbital. First, am I correct about the lone pairs being non-equivalent? Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons? Lastly, if it turns out the lone pairs are actually inequivalent, can this be reconciled with the traditional explanation (due to VSEPR theory) that oxygen is $\mathrm{sp^3}$ and the lone pairs are equivalent?
Water, as simple as it might appear, has quite a few extraordinary things to offer. Much of it is not what it appears to be. Before diving deeper, a few cautionary words about hybridisation. Hybridisation is an often misconceived concept. It is only a mathematical interpretation, which explains a certain bonding situation (in an intuitive fashion). In a molecule the equilibrium geometry will result from various factors, such as steric and electronic interactions, and furthermore interactions with the surroundings like a solvent or external field. The geometric arrangement is not adopted because a molecule is hybridised in a certain way; it is the other way around, i.e. the hybridisation is a result of the geometry, or more precisely an interpretation of the wave function for the given molecular arrangement. In molecular orbital theory linear combinations of all available (atomic) orbitals will form molecular orbitals (MO). These are spread over the whole molecule, or delocalised, and in a quantum chemical interpretation they are called canonical orbitals. Such a solution (approximation) of the wave function can be unitarily transformed to form localised molecular orbitals (LMO). The solution (the energy) does not change due to this transformation. These can then be used to interpret a bonding situation in a simpler theory. Each LMO can be expressed as a linear combination of the atomic orbitals, hence it is possible to determine the coefficients of the atomic orbitals and describe these also as hybrid orbitals. It is absolutely wrong to assume that there are only three types of $\mathrm{sp}^x$ hybrid orbitals. Therefore it is entirely possible that there are multiple different types of orbitals involved in bonding for a certain atom. For more on this, read about Bent's rule on the network. [1] Let's look at water; Wikipedia is so kind to provide us with a schematic drawing: The bonding angle is quite close to the ideal tetrahedral angle, so one would assume that the involved orbitals are $\mathrm{sp}^3$ hybridised. There is also a connection between bond angle and hybridisation, called Coulson's theorem, which lets you approximate the hybridisation. [2] In this case the orbitals involved in the bonds would be $\mathrm{sp}^4$ hybridised. (Close enough.) Let us also consider the symmetry of the molecule. The point group of water is $C_\mathrm{2v}$. Because there are mirror planes, in the canonical bonding picture π-type orbitals [3] are necessary. We have an orbital with appropriate symmetry, which is the p-orbital sticking out of the bonding plane. This interpretation is not only valid, it is one that comes as the solution of the Schrödinger equation. [4] That leaves a hybridisation of $\mathrm{sp}^{2/3}$ for the other orbital. If we make the reasonable assumption that the oxygen hydrogen bonds are $\mathrm{sp}^3$ hybridised, and the out-of-plane lone pair is a p orbital, then the maths is a bit easier and the in-plane lone pair is sp hybridised. [5] A calculation on the M06/def2-QZVPP level of theory gives us the following canonical molecular orbitals: (Orbital symmetries: $2\mathrm{A}_1$, $1\mathrm{B}_2$, $3\mathrm{A}_1$, $1\mathrm{B}_1$) [6,7] Since the interpretation with hybrid orbitals is equivalent, I used natural bond orbital theory to interpret the results. This method transforms the canonical orbitals into localised orbitals for easier interpretation.
Here is an excerpt of the output (core orbital and polarisation functions omitted) giving us the calculated hybridisations: (Occupancy) Bond orbital / Coefficients / Hybrids ------------------ Lewis ------------------------------------------------------ 2. (1.99797) LP ( 1) O 1 s( 53.05%)p 0.88( 46.76%)d 0.00( 0.19%) 3. (1.99770) LP ( 2) O 1 s( 0.00%)p 1.00( 99.69%)d 0.00( 0.28%) 4. (1.99953) BD ( 1) O 1- H 2 ( 73.49%) 0.8573* O 1 s( 23.41%)p 3.26( 76.25%)d 0.01( 0.31%) ( 26.51%) 0.5149* H 2 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%) 5. (1.99955) BD ( 1) O 1- H 3 ( 73.48%) 0.8572* O 1 s( 23.41%)p 3.26( 76.27%)d 0.01( 0.30%) ( 26.52%) 0.5150* H 3 s( 99.65%)p 0.00( 0.32%)d 0.00( 0.02%) ------------------------------------------------------------------------------- As we can see, that pretty much matches the assumption of sp 3 oxygen hydrogen bonds, a p lone pair, and a sp lone pair. Does that mean that the lone pairs are non-equivalent? Well, that is at least one interpretation. And we only deduced all that from a gas phase point of view. When we go towards condensed phase, things will certainly change. Hydrogen bonds will break the symmetry, dynamics will play an important role and in the end, both will probably behave quite similarly or even identical. Now let's get to the juicy part: Second, if so, does this have any significance in actual physical systems (i.e. is it a measurable phenomenon), and what is the approximate energy difference between the pairs of electrons? Well the first part is a bit tricky to answer, because that is dependent on a lot more conditions. But the part in parentheses is easy. It is measurable with photoelectron spectroscopy. There is a nice orbital scheme correlated to the orbital ionisation potential on the homepage of Michael K. Denk for water. [8] Unfortunately I cannot find license information, or a reference to reproduce, hence I am hesitant to post it here. However, I found a nice little publication on the photoelectron spectroscopy of water in the bonding region. [9] I'll quote some relevant data from the article. $\ce{H2O}$ is a non-linear, triatomic molecule consisting of an oxygen atom covalently bonded to two hydrogen atoms. The ground state of the $\ce{H2O}$ molecule is classified as belonging to the $C_\mathrm{2v}$ point group and so the electronic states of water are described using the irreducible representations $\mathrm{A}_1$, $\mathrm{A}_2$, $\mathrm{B}_1$, $\mathrm{B}_2$. The electronic configuration of the ground state of the $\ce{H2O}$ molecule is described by five doubly occupied molecular orbitals: $$\begin{align} \underbrace{(1\mathrm{a}_1)^2}_{\text{core}}&& \underbrace{(2\mathrm{a}_1)^2}_{\text{inner-valence orbital}}&& \underbrace{ (1\mathrm{b}_2)^2 (3\mathrm{a}_1)^2 (1\mathrm{b}_1)^2 }_{\text{outer-valence orbital}}&& \mathrm{X~^1A_1} \end{align}$$ [..] In addition to the three band systems observed in HeI PES of $\ce{H2O}$, a fourth band system in the TPE spectrum close to 32 eV is also observed. As indicated in Fig. 1, these band systems correspond to the removal of a valence electron from each of the molecular orbitals $(1\mathrm{b}_1)^{-1}$, $(3\mathrm{a}_1)^{-1}$, $(1\mathrm{b}_2)^{-1}$ and $(2\mathrm{a}_1)^{-1}$ of $\ce{H2O}$. As you can see, it fits quite nicely with the calculated data. From the image I would say that the difference between $(1\mathrm{b}_1)^{-1}$ and $(3\mathrm{a}_1)^{-1}$ is about 1-2 eV. TL;DR As you see your hunch paid off quite well. 
Photoelectron spectroscopy of water in the gas phase confirms that the lone pairs are non-equivalent. Conclusions for condensed phases might be different, but that is a story for another day. Notes and References What is Bent's rule? Utility of Bent's Rule - What can Bent's rule explain that other qualitative considerations cannot? Formal theory of Bent's rule, derivation of Coulson's theorem (Wikipedia). Worked example for cyclopropane by ron. A π orbital has one nodal plane collinear with the bonding axis; it is antisymmetric with respect to this plane. A bit more explanation in my question What would follow in the series sigma, pi and delta bonds? Within the approximation that molecular orbitals are a linear combination of atomic orbitals (MO = LCAO). The terminology we use for hybridisation actually is just an abbreviation: $$\mathrm{sp}^{x} = \mathrm{s}^{\frac{1}{x+1}}\mathrm{p}^{\frac{x}{x+1}}$$ In theory $x$ can have any value; since it is just a unitary transformation the representation does not change, hence \begin{align} 1\times\mathrm{s}, 3\times\mathrm{p} &\leadsto 4\times\mathrm{sp}^3 \\ &\leadsto 3\times\mathrm{sp}^2, 1\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}, 2\times\mathrm{p} \\ &\leadsto 2\times\mathrm{sp}^3, 1\times\mathrm{sp}, 1\times\mathrm{p} \\ &\leadsto \text{etc. pp.}\\ &\leadsto 2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)} \end{align} There are virtually infinite possibilities of combination. This and the next footnote address a couple of points that were raised in a comment by DavePhD. While I already extensively answered that there, I want to include a few more clarifying points here. (If I do it right, the comments become obsolete.) What is the reason for concluding 2 lone pairs versus 1 or 3? For example Mulliken has in table V the b1 orbital being a definite lone pair (no H population) but the two a1 orbitals both have about 0.3e population on H. Would it be wrong to say only one of the PES energy levels corresponds to a lone pair, and the other 3 has some significant population on hydrogen? Are Mulliken's calculations still valid? – DavePhD The article Dave refers to is R. S. Mulliken, J. Chem. Phys. 1955, 23, 1833, which introduces Mulliken population analysis. In this paper Mulliken analyses wave functions on the SCF-LCAO-MO level of theory. This is essentially Hartree–Fock with a minimal basis set. (I will address this in the next footnote.) We have to understand that this was state-of-the-art computational chemistry back then. What we take for granted nowadays, calculating the same thing in a few seconds, was revolutionary back then. Today we have a lot fancier methods. I used density functional theory with a very large basis set. The main difference between these approaches is that the level I use recovers a lot more of the electron correlation than the method of Mulliken. However, if you look closely at the results it is quite impressive how well these early approximations perform. On the M06/def2-QZVPP level of theory the geometry of the molecule is optimised to have an oxygen hydrogen distance of 95.61 pm and a bond angle of 105.003°. This is quite close to the experimental results. The contributions to the orbitals are given as follows. I include the orbital energies (OE), too. The contributions of the atomic orbitals are given such that 1.00 is the total for each molecular orbital. Because the basis set has polarisation functions, the missing parts are attributed to these. The threshold for printing is 3%.
(I also rearranged the Gaussian output for better readability.) Atomic contributions to molecular orbitals: 2: 2A1 OE=-1.039 is O1-s=0.81 O1-p=0.03 H2-s=0.07 H3-s=0.07 3: 1B2 OE=-0.547 is O1-p=0.63 H2-s=0.18 H3-s=0.18 4: 3A1 OE=-0.406 is O1-s=0.12 O1-p=0.74 H2-s=0.06 H3-s=0.06 5: 1B1 OE=-0.332 is O1-p=0.95 We can see that there is indeed some contribution by the hydrogens to the in-plane lone pair of oxygen. On the other hand, we see that there is only one orbital where there is a large contribution by hydrogen. One could easily come up here with the theory of one or three lone pairs of oxygen, depending on your own point of view. Mulliken's analysis is based on the canonical orbitals, which are delocalised, so we will never have a pure lone pair orbital. When we refer to orbitals as being of a certain type, we imply that this is the largest contribution. Often we also use visual aids like pictures of these orbitals to decide if they are of bonding or anti-bonding nature, or if their contribution is on the bonding axis. All these analyses are highly biased by your point of view. There is no right or wrong when it comes to separation schemes. There is no hard evidence obtainable for any of these. These are mathematical interpretations that, in the best case, help us understand bonding better. Thus deciding whether water has one, two or three (or even four) lone pairs is somewhat playing with numbers until something seems to fit. Bonding is too complicated to be condensed into simple pictures. (That's why I am an advocate for using Lewis structures only cautiously.) The NBO analysis is another separation scheme, one that aims to transform the obtained canonical orbitals into a Lewis-like picture for a better understanding. This transformation does not change the wave function and in this way is just as valid a representation as the other approaches. What you lose by this approach are the orbital energies, since you break the symmetry of the wave function, but explaining this would take us much too far. In a nutshell, the localisation scheme aims to transform the delocalised orbitals into orbitals that correspond to bonds. From a quite general point of view, Mulliken's calculations (he actually only interpreted the results of others) and conclusions hold up to a certain point. Nowadays we know that his population analysis has severe problems, but within the minimal basis they still produce justifiable results. The popularity of this method comes mainly from the fact that it is very easy to perform. See also: Which one, Mulliken charge distribution and NBO, is more reliable? Mulliken used an SCF-LCAO-MO calculation by Ellison and Shull and was so kind as to include the main results in his paper. The oxygen hydrogen bond distance is 95.8 pm and the bond angle is 105°. I performed a calculation on the same geometry at the HF/STO-3G level of theory for comparison. It obviously does not match perfectly, but well enough for a little bit of further discussion. NO SYM HF/STO-3G : N(O) N(H2) | Mulliken : N(O) N(H2) 1 1A1 -550.79 2.0014 -0.0014 | -557.3 2.0007 -0.0005 2 2A1 -34.49 1.6113 0.3887 | -36.2 1.688 0.309 3 1B2 -16.82 1.0700 0.9300 | -18.6 0.918 1.080 4 3A1 -12.29 1.6837 0.3163 | -13.2 1.743 0.257 5 1B1 -10.63 2.0000 0.0000 | -11.8 2.000 As a side note: I was completely unable to read the Mulliken analysis printed by Gaussian, so I used MultiWFN instead. It is also not an exactly equivalent approach, because they expressed the hydrogen atoms with group orbitals. The results don't differ by much.
The basic approach of Mulliken is to split the overlap population evenly between the orbitals of the two atoms involved. That is a principal problem of the method, as the contributions to the MO can be quite different. Resulting problematic points are occupation values larger than two or smaller than zero, which clearly have no physical meaning. The analysis is especially ruined by diffuse functions. At the time, Mulliken certainly could not know what we are able to do today, nor under which conditions his approach breaks down; it is still amusing to read sentences like the following today. Actually, very small negative values occasionally occur [...]. [...] ideally to the population of the AO [...] should never exceed the number 2.00 of electrons in a closed atomic sub-shell. Actually, [the orbital population] in some instances does very slightly exceed 2.00 [...]. The reason why these slight but only slight imperfections exist is obscure. But since they are only slight, it appears that the gross atomic populations calculated using Eq. (6') may be taken as representing rather accurately the "true" populations in various AOs for an atom in a molecule. It should be realized, of course, that fundamentally there is no such thing as an atom in a molecule except in an approximate sense. For much more on this, I found an explanation of the Gaussian output along with the reference to F. Martin, H. Zipse, J. Comp. Chem. 2005, 26, 97–105, available as a copy. I have not read it, though. Scroll down to the bottom of the page for the image, and read for more information: CHEM 2070, Michael K. Denk: UV-Vis & PES (University of Guelph). If dead: Wayback Machine. S.Y. Truong, A.J. Yencha, A.M. Juarez, S.J. Cavanagh, P. Bolognesi, G.C. King, Chemical Physics 2009, 355 (2–3), 183–193. Or try this mirror.
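As a small numerical appendix (my own check, not part of the original argument): Coulson's relation mentioned above in connection with reference [2], together with the orbital energies from the M06/def2-QZVPP calculation listed above, can be plugged in directly; they reproduce the $\mathrm{sp}^4$ estimate and the roughly 1–2 eV splitting discussed in the body of the answer.

```python
import math

# Coulson's relation for two equivalent sp^(lambda^2) hybrids on one atom:
#   1 + lambda^2 * cos(theta) = 0   =>   lambda^2 = -1 / cos(theta)
theta = math.radians(104.5)                   # experimental H-O-H angle
lam2 = -1.0 / math.cos(theta)
print(f"O-H bonding hybrids ~ sp^{lam2:.1f}")  # ~ sp^4.0

# Splitting between the two highest occupied canonical orbitals (3a1 and 1b1),
# using the orbital energies quoted above (in hartree).
hartree_to_eV = 27.2114
gap = (-0.332 - (-0.406)) * hartree_to_eV
print(f"3a1/1b1 splitting ~ {gap:.1f} eV")     # ~2 eV, consistent with the PES estimate
```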
{ "source": [ "https://chemistry.stackexchange.com/questions/50906", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/17583/" ] }
51,027
I have seen this phrase several times across DFT textbooks. However, I am not sure if it still holds. Was there a change or a theorem that proved it otherwise? Several programs display wavefunctions for DFT calculations. For example, Quantum ESPRESSO has the option to collect wave functions. Some of the people that I have talked to mention the Kohn–Sham orbitals as if they were a real physical thing. Shouldn't it be impossible to obtain the wavefunction for a certain state? For example, I thought it was not possible to tell that a Kohn–Sham orbital belongs to a wavefunction near the surface of a material. I know that the Kohn–Sham orbitals lack a physical meaning. They are simply wavefunctions that are used to obtain the correct density. They do not have an analogue of Koopmans' theorem. In addition, "the exact wave function of the target system is not available in density functional theory" (Koch, 2001). So why do people need/use wavefunctions (after all, most DFT programs can collect such wavefunctions)? I thought that they were Kohn–Sham orbitals. Therefore, there should seldom be any use for them.
There are no wavefunctions in DFT. I don't like that "wave function" is used here in the plural form, and I feel like it reflects OP's misconception. The right way to say this is "There is no wave function in DFT." See, there exists only one wave function, the function that describes the state of the system in question and can be used to calculate its properties. When it comes to a many-electron system, the wave function is a function that describes the state of all the electrons present in the system interacting with each other. This is the wave function people have in their minds when they say that it is absent in DFT. And that is true: the whole point of DFT is to get rid of this function as a description of state, since it is too complex to work with and difficult to visualise. But that is not the whole story. In practice, Kohn–Sham DFT is used, which introduces a fictitious system of non-interacting electrons. Each electron in the fictitious system is described by a Kohn–Sham orbital, a single-particle wave function, while the whole fictitious system is described by the Kohn–Sham wave function, a Slater determinant constructed from a set of Kohn–Sham orbitals. Still there is no wave function in KS-DFT in the above mentioned sense: KS orbitals don't qualify as the wave function just because they do not describe the state of the whole system; the Kohn–Sham wave function doesn't qualify because it describes a different system (fictitious non-interacting rather than real interacting). So, the KS wave function is not the wave function, since it doesn't describe the (real) system. You can't use it to calculate the properties of the (real) system; you have to use the electron density to do so. In this sense, the statement that there is no wave function in DFT holds true for KS-DFT as well. Whether the KS wave functions (both the single-particle and the many-particle ones) can be useful in some other way is a different story. See, for instance, the discussion of the usefulness of KS orbitals here.
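For readers who want the bookkeeping behind the wording above, the standard Kohn–Sham construction can be summarised in two formulas (textbook notation, not tied to any particular program):

$$\Phi_\mathrm{KS}(\mathbf{x}_1,\dots,\mathbf{x}_N)=\frac{1}{\sqrt{N!}}\det\bigl[\varphi_i(\mathbf{x}_j)\bigr], \qquad n(\mathbf{r})=\sum_{i=1}^{N}\left|\varphi_i(\mathbf{r})\right|^2$$

The Slater determinant $\Phi_\mathrm{KS}$ built from the KS orbitals $\varphi_i$ describes the fictitious non-interacting system; only the density $n(\mathbf{r})$ it yields is, in principle, the quantity through which properties of the real interacting system are obtained.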
{ "source": [ "https://chemistry.stackexchange.com/questions/51027", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/3983/" ] }
51,099
So, I was doing some research on bonding and electronegativity, and one of the examples used was NaCl (sodium chloride), which is table salt. We don't buy processed salt; rather, we buy salt from the Himalayas. Apparently this is better salt because it is not processed, manufactured, etc. But is there a difference between NaCl and Himalayan salt in regard to their molecular structure?
No salt will be pure NaCl. Each will have some amount of other elements. The fact that the salt isn't white confirms there are other elements present. See Analysis of Gourmet Salts for the Presence of Heavy Metals, which investigates 14 salts, including two Himalayan salts. See especially "Table 3. Comparison of toxic elements in Table Salt". A "Himalayan Pink Fine Mineral Salt" was found to have the highest level of cadmium and tied for first for the highest level of nickel. NaCl isn't molecular, but instead is a lattice of $\ce{Na+}$ and $\ce{Cl-}$ ions. Other ions can lie within this lattice as impurities.
{ "source": [ "https://chemistry.stackexchange.com/questions/51099", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/28884/" ] }
51,632
My teacher told me about resonance and explained it as different structures which are flipping back and forth and that we only observe a sort of average structure. How does this work? Why do the different structures not exist on their own?
This answer is intended to clear up some misconceptions about resonance which have come up many times on this site. Resonance is a part of valence bond theory which is used to describe delocalised electron systems in terms of contributing structures, each only involving 2-centre-2-electron bonds. It is a concept that is very often taught badly and misinterpreted by students. The usual explanation is that it is as if the molecule is flipping back and forth between different structures very rapidly and that what is observed is an average of these structures. This is wrong! (There are molecules that do this (e.g. bullvalene), but the rapidly interconverting structures are not called resonance forms or resonance structures.) Individual resonance structures do not exist on their own. They are not in some sort of rapid equilibrium. There is only a single structure for a molecule such as benzene, which can be described by resonance. The difference between an equilibrium situation and a resonance situation can be seen on a potential energy diagram. This diagram shows two possible structures of the 2-norbornyl cation. Structure (a) shows the single delocalised structure, described by resonance, whereas structures (b) show the equilibrium option, with the delocalised structure (a) as a transition state. The key point is that resonance hybrids are a single potential energy minimum, whereas equilibrating structures are two energy minima separated by a barrier. In 2013 an X-ray diffraction structure was finally obtained and the correct structure was shown to be (a). Resonance describes delocalised bonding in terms of contributing structures that give some of their character to the single overall structure. These structures do not have to be equally weighted in their contribution. For example, amides can be described by the following resonance structures: The left structure is the major contributor but the right structure also contributes, and so the structure of an amide has some double bond character in the C-N bond (i.e. the bond order is >1) and less double bond character in the C-O bond (bond order <2). The alternative to valence bond theory and the resonance description of molecules is molecular orbital theory. This explains delocalised bonding as electrons occupying molecular orbitals which extend over more than two atoms.
{ "source": [ "https://chemistry.stackexchange.com/questions/51632", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/10185/" ] }
51,643
Is there a single organic molecule that has strong absorbance throughout the visible range? For example a black dye that is based on a single molecule?
{ "source": [ "https://chemistry.stackexchange.com/questions/51643", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/10104/" ] }
52,010
I am wondering if fractional oxidation states of an atom are possible. I'm not referring to cases such as $\ce{Fe3O4}$ or $\ce{Mn3O4}$, where the average oxidation state is fractional, since these actually comprise a mixture of atoms which are individually in the +2 and +3 oxidation states. What I mean is: is it possible for an individual atom in some compound to have an oxidation state of (for example) 2.5? To me it doesn't seem possible just because of the way oxidation states are defined. However, I have seen some sources which state that fractional oxidation states are possible. I would be interested in knowing if there is some weird compound that has fractional oxidation states. Note: This is not a duplicate of Are fractional oxidation states possible? I want to know if it is possible for an individual atom in some compound to have a fractional oxidation state, not whether the average oxidation state can be fractional.
It depends. Consider various radicals such as the superoxide anion $\ce{O2^{.-}}$ or $\ce{NO2^{.}}$. For both of these, we can draw simple Lewis representations: In these structures, the oxygen atoms would have different oxidation states ($\mathrm{-I}$ and $\pm 0$ for superoxide, $\mathrm{-II}$ and $\mathrm{-I}$ for $\ce{NO2}$). That is the strict, theoretical IUPAC answer to the question. However, we also see that the oxygens are symmetry-equivalent (homotopic) and should thus be identical. Different oxidation states violate the identity rule. For each compound, we can imagine an additional resonance structure that puts the radical on the other oxygen. (For $\ce{NO2}$, we can also draw resonance structures that locate a radical on both oxygens and another one that expands nitrogen’s octet and localises the radical there.) To better explain this physical reality theoretically, we can calculate a ‘resonance-derived average oxidation state’ which would be $-\frac{1}{2}$ for superoxide and $-\frac{3}{2}$ for $\ce{NO2}$. This is not in agreement with IUPAC’s formal definition but closer to the physical reality.
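To make the averaging explicit (simple bookkeeping over the two equivalent resonance structures described above; the value is per oxygen atom):

$$\ce{O2^{.-}}:\ \frac{(-1)+0}{2}=-\frac{1}{2} \qquad\qquad \ce{NO2^{.}}\ (\text{each O}):\ \frac{(-2)+(-1)}{2}=-\frac{3}{2}$$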
{ "source": [ "https://chemistry.stackexchange.com/questions/52010", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/18002/" ] }
52,026
TL;DR - given a sample of bone which has been encased in asphalt for tens of thousands of years and has become saturated with the asphalt, how can you get an accurate age for the specimen through carbon dating? This past weekend I took my kids to the Page Museum at the La Brea Tar Pits in Los Angeles. While there we saw an astonishing number of fossilized Pleistocene animals. The interesting thing is that all of these fossils have a lovely (to my eyes, at least) brown color which is staining from the asphalt in which these bones have lain for thousands of years. While there, we saw a presentation which explained that the asphalt seeps into the porous bones and it was explained that we know how old the specimens are through carbon dating. Which got me thinking - if the specimens are all highly contaminated with asphalt, how can they know that the sample they're testing contains carbon from the bone and not from the asphalt (which is presumably much older than the bone)? I did try asking one of the docents, but they're really not equipped to handle such a technical question.
{ "source": [ "https://chemistry.stackexchange.com/questions/52026", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/30610/" ] }
53,550
I was just wondering: if heating food up is the result of increasing the energy of bends and stretches in the bonds of the molecules, is it ever possible for tiny amounts of x-rays and gamma rays to be emitted? When we give a molecule enough energy, its electrons can jump to higher orbitals and then return to the ground state, releasing EMR with a frequency proportional to the energy gap. So if we heat food, is there a chance that some electron receives enough energy to jump up to a really high energy level?
In theory, yes, you can heat objects to a high enough temperature to emit x-rays or gamma rays. You cannot do this to food, and you certainly cannot do this in your kitchen (or probably any kitchen). Let's take the lowest energy x-ray out there and see what it would take. X-rays range in frequency from roughly $3 \times 10^{16}$ to $3\times 10^{19}$ hertz. The energy of one photon of 30 petahertz ($3.0\times 10^{16}\ \mathrm{Hz}$) radiation is: $$E=h\nu = \left(6.626\times 10^{-34}\ \mathrm{J\cdot s}\right)\left(3.0\times 10^{16}\ \mathrm{s^{-1}}\right) = 1.988 \times 10^{-17}\ \mathrm{J}$$ This is not a lot of energy! However, a single photon is boring. Let's consider a mole of photons. This will also ease comparison with other phenomena, whose energies are listed per mole of events. $$1.988 \times 10^{-17}\ \mathrm{J} \times 6.022\times 10^{23}\ \mathrm{mol^{-1}}=1.197\times 10^7\ \mathrm{J\cdot mol^{-1}}$$ In theory, if you could pump that much energy into something, you should get some high energy photons out. In practice, it does not work that way. Other stuff happens first. To simplify our example, let's just consider 1 mole of water (18.0 grams) and heat it up. The fate of basically any other matter will be the same, but the energy required will vary a bit. First, adding energy heats the water. If we start at room temperature $\left(20\ ^\circ\mathrm{C}\right)$, it takes $80\ ^\circ\mathrm{C}\times 18\ \mathrm{g}\times 4.184\ \mathrm{J\cdot g^{-1}\cdot {}^\circ C^{-1}}=6025\ \mathrm{J}$ to heat that water to boiling. It takes 40.66 kJ to convert the water into gas. Neither of these puts a big dent in our energy. It takes further energy to heat the water vapor again, but let's see how far we need to take it. Once we get enough energy into our sample of water, the molecules start to fall apart. $$\ce{H2O(g) -> 2H(g) + O(g)} \qquad \Delta H^\circ =+920\ \mathrm{kJ\cdot mol^{-1}},\quad \Delta S^\circ =0.202\ \mathrm{kJ\cdot mol^{-1}\cdot K^{-1}}$$ By fixing $\Delta G=0$ at equilibrium, we can solve for a temperature at which this reaction becomes spontaneous: $$T=\dfrac{\Delta H^\circ}{\Delta S^\circ}=\dfrac{+920\ \mathrm{kJ\cdot mol^{-1}}}{0.202\ \mathrm{kJ\cdot mol^{-1}\cdot K^{-1}}}\approx 4596\ \mathrm{K}$$ We need to heat our water vapor up an additional 4218 K, which takes $18\ \mathrm{g}\times 1.996\ \mathrm{J\cdot g^{-1}\cdot K^{-1}}\times 4218\ \mathrm{K}=151.5\times 10^3\ \mathrm{J}$. So, we now have pumped nearly 200,000 J into our water sample, atomized it, and heated it to approximately 5000 K. We are now close to the temperature of the outer layers of the sun! Surely we have enough energy at this temperature to produce x-rays. Nope. At 5000 K, we produce minimal x-rays. Most of the radiation is in the visible, UV, and IR (think about what we get from the sun). Below is a plot of black-body radiation as a function of temperature (image by Wikipedia user Darth Kule and released into the public domain): Okay, so we are far beyond the reality of what can happen in a conventional oven (or almost any reasonable heat source used for food). At this temperature, we can use the Planck law to calculate the power output ($I$) of x-rays at the temperature. We can also do this at some normal temperatures and for gamma rays. This model is a little goofy, since food is not a black body, but we will at least calculate the maximum x-ray and gamma ray output. Rather than grinding through all the maths, I'll just put in a table of some temperatures and watts. 1 watt is not a lot of power; a typical household light bulb draws only tens of watts.
$$\begin{array}{|c|c|c|c|}\hline \mathrm{T\ (K)} & \mathrm{P_{x-ray}\ (W)} & \mathrm{P_{gamma}\ (W)} & \mathrm{notes} \\ \hline 373 & \approx 0 & \approx 0 & \text{boiling point of water} \\ \hline 550 & \approx 0 & \approx 0 & \text{approximate common highest temperature on residential ovens}\\ \hline 700-800 & \approx 0 & \approx 0 & \text{temperature range for wood-fired ovens, tandoors, etc.}\\ \hline 5770 & 4.26\times 10^{-129} & \approx 0 & \text{temperature of the photosphere of the sun}\\ \hline 1.57\times 10^7 & 10.4 & 7.87\times 10^{-54} & \text{estimated temperature of the center of the sun} \\ \hline \end{array}$$ So, if you could heat your food to the temperature of the sun, it would produce minuscule x-ray radiation. It would also no longer resemble food.
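If you want to reproduce these order-of-magnitude numbers yourself, here is a rough Python sketch (my own addition, not from the original answer) that computes the photon energy used above and numerically estimates what fraction of black-body emission falls in the x-ray band at a few temperatures. The band edges ($3\times10^{16}$ to $3\times10^{19}$ Hz) and the chosen temperatures are assumptions taken from the discussion above; as noted there, a black body is only a crude stand-in for real food.

import numpy as np

h  = 6.626e-34   # Planck constant, J s
c  = 2.998e8     # speed of light, m s^-1
kB = 1.381e-23   # Boltzmann constant, J K^-1
NA = 6.022e23    # Avogadro constant, mol^-1

nu_xray_lo, nu_xray_hi = 3.0e16, 3.0e19   # assumed x-ray band edges, Hz

# Energy of one 30 PHz photon and of a mole of them (matches the numbers above)
E_photon = h * nu_xray_lo
print(f"E(photon) = {E_photon:.3e} J,  E(mole of photons) = {E_photon * NA:.3e} J/mol")

def planck_B_nu(nu, T):
    """Black-body spectral radiance B_nu(T), written to avoid exp overflow."""
    x = h * nu / (kB * T)
    return 2.0 * h * nu**3 / c**2 * np.exp(-x) / (1.0 - np.exp(-x))

def xray_fraction(T, n=200001):
    """Fraction of the total black-body output emitted between the band edges."""
    nu = np.linspace(nu_xray_lo, nu_xray_hi, n)
    B = planck_B_nu(nu, T)
    in_band = np.sum(0.5 * (B[1:] + B[:-1]) * np.diff(nu))   # trapezoid rule
    sigma = 2.0 * np.pi**5 * kB**4 / (15.0 * h**3 * c**2)    # Stefan-Boltzmann constant
    total = sigma * T**4 / np.pi                             # integral of B_nu over all frequencies
    return in_band / total

for T in (373, 550, 5000, 5770):
    print(f"T = {T:5d} K   x-ray fraction of black-body output ~ {xray_fraction(T):.3e}")

Even at 5770 K the in-band fraction is vanishingly small, which is the point of the table: thermal sources at cooking (or even solar-surface) temperatures emit essentially nothing in the x-ray band.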
{ "source": [ "https://chemistry.stackexchange.com/questions/53550", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/27584/" ] }
54,108
I'm doing single point energy calculations of FeS using ORCA . I originally used DFT with a variety of functionals, and these calculations all took less than a minute. Now I'm attempting to run CASSCF calculations using 8 active orbitals, 12 active electrons, and the aug-cc-pVTZ basis set. It's been running for 3 days. Is this typical, or should I be attempting to change tolerances and other settings? How long would one anticipate something like this to run? In addition, how can I (generally speaking) evaluate how close a given calculation is to finishing?
There is nothing trivial about MCSCF calculations because it is hard to predict a priori how long a calculation will take. There are well-defined equations for calculating how many determinants $$ D(n,N,S) = \binom{n}{N/2+S} \binom{n}{N/2-S} $$ or configuration state functions (CSFs) $$ D(n,N,S) = \frac{2S+1}{n+1} \binom{n+1}{N/2-S} \binom{n+1}{N/2+S+1} $$ will be required for your problem; $n$ is the number of active orbitals, $N$ is the number of active electrons, and $S$ the total spin. The calculation cost will scale with this number and can be further reduced with the use of point group symmetry. (Actually, depending on your system and the property of interest, calculations without symmetry can be meaningless, like for excited states and their energies.) These equations should make it clear that the cost grows very rapidly, considering that for CASSCF, a full CI is being performed within your chosen active space. However, the sheer size of the CI expansion is not what makes MCSCF calculations tricky in practice; convergence is. Here is a plot of the total energy after the first 30 iterations: the energy stalls rather than converging. This is because the occupation numbers are very close to 0/1/2, and they aren't changing with each iteration. Notice that the occupation numbers have inverted themselves, and repeated at every step is Rot=23,19. Orbital 19 is within the active space, but 23 is external. This is an indication that the orbitals in the active space are not the correct ones. From the paper: the 12 or 13 valence electrons of the neutral or anion, respectively, were distributed among 14 valence active orbitals: 3p of sulphur plus 3d, 3d' and 4s orbitals of iron. With this active space we found that all important correlating orbitals are included in the active space. MCSCF is not a black box; it's one of the few quantum chemical methods left that requires chemical intuition to even get started. This is a (12,14) active space; the one in your calculations is (12,8). The procedure I use for running MCSCF calculations is generally: Converge a single-reference calculation. This could be HF, DFT, MP2, etc. The usual recommendation is to form MP2 natural orbitals, then check the occupation numbers. If the occupation deviates more than 0.02 from an integer value, then it should probably be in the active space. This is in addition to considering the electrons and orbitals you wanted to correlate to begin with, along with antibonding partners. For example, in ethylene this is probably the pi bonding/antibonding pair, leading to a (2,2), and for 3d metal complexes this is whatever coordination (anti)bonds are present, with potentially the double d shell. I've had some luck with using DFT orbitals for metal complexes, where perturbation theory might be a bad idea due to near-degeneracies or cost. The goals here are to confirm your active space and provide the best possible starting orbitals for MCSCF. Look at your orbitals. Of course, read the Mulliken or Lowdin population analysis, but from personal experience, I've caught many potential mistakes by doing this. It also helps when visualizing how the single-state, single-reference orbitals transform to the natural orbitals (potentially state-averaged) that result from MCSCF calculations. Reorder your orbitals if necessary before running the MCSCF. Except for the most trivial cases, the orbitals that belong in the active space are not the orbitals that come straight out of your single-reference calculation. If they were, then you might not be running MCSCF to begin with.
I also tend to look at the reordered orbitals one more time before doing MCSCF to confirm I haven't made any mistakes. Read in the previous set of (reordered) orbitals into a separate MCSCF calculation and cross your fingers. Monitoring the occupation numbers of your active space is a good guide for seeing how the calculation is converging. Do whatever post-processing you need for your project. For me, this is looking at final occupation numbers, the orbitals themselves, and then doing MR-CISD or some other multireference correlated calculation. Doing an auto-occupation procedure without starting from previously-converged orbitals means that 1. you don't have any guarantees about the contents of your active space, 2. you start from orbitals that are close to Hartree-Fock in quality, and 3. the calculation has to do far more work in optimizing both sets of variational parameters (the MO coefficients $\{C\}$ and the CI coefficients $\{c\}$) since the internal and external orbitals (where the CI coefficients are frozen) are still far from convergence using a single-reference method. To walk through part of this workflow, I ran a PBE0/aug-cc-pVTZ calculation (converges much faster than the SCF before MP2) to look at the Lowdin MO populations:

! pbe0 aug-cc-pvtz cc-pvtz/jk ri rijk tightscf usesym zora

%output
  print[p_orbpopmo_l] 1
end

* xyz 0 5
Fe 0.000000 0.000000 0.000000
S  1.960000 0.000000 0.000000
*

The MO indices are S 3s: 14, Fe 4s: 20, Fe 3d: 15,16,17,18,19, and Fe 3d': 33,34,36-40. Automatic occupation would have gotten the occupieds right, but the virtuals wrong. I'm not sure which of the 7 virtual MOs that might be the Fe 3d' are the correct ones; there is probably some trial-and-error here, but these should be visualized. Another point is that I've only looked at the ordering for alpha-spin orbitals; for difficult cases such as transition metals, the spatial ordering for beta-spin orbitals can be very different. The reordering for alpha- and beta-spin orbitals might be different. Again, this is speaking from experience. The final exhibit is the RI-MP2/aug-cc-pVTZ natural orbitals that might enter an MCSCF calculation based on their occupation numbers:

N[ 15]( A2) = 1.97761944
N[ 16]( A1) = 1.97753198
N[ 17]( B2) = 1.97009578
N[ 18]( B1) = 1.97009578
N[ 19]( A1) = 1.03828526
N[ 20]( A1) = 0.99215102
N[ 21]( B1) = 0.98317529
N[ 22]( B2) = 0.98317529
N[ 23]( A2) = 0.02013554
N[ 24]( B2) = 0.01947713
N[ 25]( B1) = 0.01947713
N[ 26]( A1) = 0.01816184

To be thorough, this should be combined with looking at the Lowdin MO populations of these, which might require reading them into another (SCF) calculation and not performing any iterations. However, it is well-known that converging MCSCF equations is generally difficult. In my limited experience, DALTON, GAMESS, and Molcas have less trouble than ORCA. I haven't used Molpro for MCSCF. Which program you choose dictates what post-MCSCF calculations can be performed. The DIIS algorithm in ORCA, in particular, can stall even when it shouldn't. If you can afford it, the Newton-Raphson algorithm ( switchstep nr ), where the active space orbital Hessian is calculated at each step, works very well to force convergence once SuperCI is done. ORCA makes it easy to get started with CASSCF, and it has plenty of knobs, but it has a very slow integral engine, doesn't prevent you from doing stupid things like GAMESS, and more advanced features are poorly documented.
The workflow described above (namely plotting the orbitals and rotating them) becomes very tedious as well, but that has more to do with MCSCF in general. Here are some good resources that describe both practical and theoretical aspects of performing MCSCF calculations: 1, 2, 3, 4.
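To get a feel for the sizes involved, here is a small Python sketch (my addition, not part of the original answer) of the two counting formulas quoted at the top of this answer. The CAS(12,8) and CAS(12,14) active spaces come from the question and the cited paper, and S = 2 matches the quintet multiplicity in the ORCA input above; everything else is generic.

from math import comb

def n_determinants(n, N, S):
    """Number of Slater determinants with M_S = S in a CAS(N electrons, n orbitals)."""
    n_alpha, n_beta = N // 2 + S, N // 2 - S
    return comb(n, n_alpha) * comb(n, n_beta)

def n_csfs(n, N, S):
    """Weyl formula: number of spin-adapted CSFs with total spin S in a CAS(N, n)."""
    return (2 * S + 1) * comb(n + 1, N // 2 - S) * comb(n + 1, N // 2 + S + 1) // (n + 1)

for n_act, N_el in ((8, 12), (14, 12)):   # CAS(12,8) from the question, CAS(12,14) from the paper
    for S in (0, 2):                      # singlet and quintet
        print(f"CAS({N_el},{n_act}), S={S}: "
              f"{n_determinants(n_act, N_el, S):>9,} determinants, "
              f"{n_csfs(n_act, N_el, S):>9,} CSFs")

Note how the (12,14) space is three to four orders of magnitude larger than the (12,8) one; even so, as stressed above, it is usually orbital convergence rather than the raw CI dimension that makes these calculations painful.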
{ "source": [ "https://chemistry.stackexchange.com/questions/54108", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/31336/" ] }
55,094
Ok, so I learned about the equilibrium constant. Now, I've seen that the equilibrium constant of burning is extremely small $(K \ll 1)$. Here, I have a question. You see, $K$ is still NOT 0, which means that the forward reaction happens at least a tiny bit. Then, shouldn't we see some parts of anything burning at least a little bit?
The equilibrium constant for combustion of organic matter in air with oxygen is not small, but extremely large ($K_\mathrm{eq} \gg 1$), as is expected from a reaction that is simultaneously very exothermic and (usually) increases entropy due to the formation of more gaseous molecules than the input oxygen. The major reason carbon-based life can exist at ambient temperature in an oxygen atmosphere is purely kinetic, not thermodynamic. You, the tree outside and everything else made of carbon are right now undergoing continuous combustion. However, in the absence of a catalyst, this process is too slow below a couple hundred degrees Celsius for it to be self-sustaining. More technically, combustion of organic matter is a highly exergonic process, but the activation energy is high. The meagre amount of heat generated by the handful of molecules reacting is too quickly diluted into the surroundings, and the reaction does not accelerate and spiral out of control (a fire, as described by the eternal Feynman). Very luckily for us, Life figured out this vast untapped source of chemical energy held back by kinetics approximately three billion years ago and developed a whole metabolic process to extract this energy in a stepwise fashion using catalysis, which we call aerobic respiration. Without it, multicellular organisms could well never have evolved.
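To put a rough number on "extremely large" (my addition, not part of the original answer): using the tabulated standard Gibbs energy of combustion of methane, roughly $-818~\mathrm{kJ\,mol^{-1}}$, the room-temperature equilibrium constant is $$K = \exp\left(-\frac{\Delta_\mathrm{r} G^\circ}{RT}\right) \approx \exp\left(\frac{8.18\times10^{5}~\mathrm{J\,mol^{-1}}}{8.314~\mathrm{J\,mol^{-1}\,K^{-1}}\times 298~\mathrm{K}}\right) \approx \mathrm{e}^{330} \approx 10^{143},$$ so for $\ce{CH4 + 2 O2 -> CO2 + 2 H2O}$ the equilibrium lies overwhelmingly on the product side; only the kinetic barrier discussed above keeps methane (and us) from burning spontaneously.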
{ "source": [ "https://chemistry.stackexchange.com/questions/55094", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/31990/" ] }
55,557
As the title says, I'm interested in knowing if there is any substance — or combination of substances — that ignites (or even increases its chance of spontaneous ignition) when cooled. I've never heard of such a thing, nor can I find it in Atkins' Physical Chemistry , but I might easily have overlooked it, since I don't know what to call it. Googling gives me information on substances that lower the freezing point or ignition point of explosives/fuels. It seems that this should obviously be forbidden on thermodynamic grounds, but I can't quite rule out a phase change allowing such behaviour, for instance.
Actually... yes! Iron(II) oxide is thermodynamically unstable below $848~\mathrm K$. As it cools down to room temperature (it has to do it slowly) it disproportionates to iron(II,III) oxide and iron: $$ \ce{4FeO -> Fe + Fe3O4}. $$ The iron is in a form of a fine powder, which is pyrophoric (it may catch a fire when exposed to air). You can see it in action here .
{ "source": [ "https://chemistry.stackexchange.com/questions/55557", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/32213/" ] }
55,951
I recently learned about the graph of activation energy that looks like this ( source ): I was wondering, what actually happens to the reactants as time passes on the graph? If we were to look at the molecules when the graph was at its peak, what would we see? And lastly, can the reaction be stopped somewhere in between, say at the peak, or does it always have to stop either at one end as a reactant, or at the other as a product?
Well, a lot of things happen to the reactants. Some bonds stretch (and maybe eventually break), others shrink, and your molecules morph into different molecules, which are the products. ( source ) As for staying at the very peak, that would be kinda unnatural, but luckily, not every peak looks like this; sometimes there is a tiny dent near the top, and with some effort you might be able to stop your reaction right there, that is, to isolate the high-energy intermediate sitting in that dent. The discovery of its structure may shed some light on the reaction mechanism and earn you a lot of likes from your colleagues.
{ "source": [ "https://chemistry.stackexchange.com/questions/55951", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/32015/" ] }
57,784
Have the geometries of molecules been proven, or are we operating strictly on mathematical theory? To put it over-simplistically: has anyone ever taken a picture of a molecule to compare against their math?
Yes. Researchers have been using atomic force microscopy (AFM) and scanning tunneling microscopy (STM) for some time for this purpose. Do note that these images are not photographs in the sense that we usually think of "pictures" and are indirect measurements of constituents of the molecule. However, they do yield "pictures" that show the geometry of the molecule(s) in question. See, for instance, this piece from UC Berkeley News published in 2013 which describes the techniques in a manner that is appropriate for a non-chemist. The same article includes images of molecules before and after a chemical reaction occurs. I also encourage you (and others) to have a look at images in the IBM STM Image Gallery , which are quite spectacular. Finally, as others have commented, X-ray diffraction merits mention from both a historical and scientific perspective. As one person notes, the helical secondary structure of DNA was determined by Watson and Crick after examining the fiber X-ray diffraction plates obtained by Franklin's group. The importance of Franklin's contribution and that of Watson and Crick are monumental achievements. That said, the structure that was elucidated was the secondary structure of a large biomolecule and as such might not fit your criteria for how "complete" the "picture" needs to be to satisfy your question. Here is the famous Photo 51 result obtained by Rosalind Franklin's doctoral student Raymond Gosling that was subsequently used by Watson and Crick. Recently, researchers have imaged single atoms using a combination of STM and MRI - see Nature Physics. P. Willke et al. Magnetic resonance imaging of single atoms on a surface. 1 July 2019 .
{ "source": [ "https://chemistry.stackexchange.com/questions/57784", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/34004/" ] }
58,020
With regard to the 'Electron Spin Number', lots of websites mention that electrons don't really spin and that the electron spin number has nothing to do with any physical spinning. However, my chemistry teacher is quite adamant about electrons spinning about their axes in clockwise and counter-clockwise directions. So my question stands, "Do electrons really spin?" Could someone cite a reliable source to read up on this (Apart from Wikipedia, thanks)
It depends on what you mean by "spin". If you mean "have intrinsic internal angular momentum, independent of its trajectory through space", then yes, electrons spin, and that's what the quantum number is measuring. Though if by "spin" you mean "undergoes rotation" ("there's a little billiard ball, and if I were to put a mark on it and watch it, the mark would be rotationally displaced around the center in a periodic fashion"), then no, electrons don't spin. They're point particles(*): there's no billiard ball to mark, and even if you were somehow to mark a "side" of a point particle (you can't), a point particle can't undergo internal rotational displacement - there's no internal to displace! Likewise, the "frequency" of the spin is also meaningless - attempt to find out how many revolutions per minute an electron "spins" and your calculator will spit garbage at you. It's not something that has any physically realizable meaning. So the spin on an electron is a very real thing and not just a convenient bookkeeping label. You can do experiments where you couple the spin angular momentum of an electron to "macro-scale" (or at least non-sub-atomic) angular momentum, and you find that the total sum of angular momentum - both "macro-scale" and electron spin - is conserved. Electron "spin" is as real and functional as a gyroscope's spin is, but not in the same way. The electron has "spin" (angular momentum) without actually rotating. Bizarre, but that's Quantum Mechanics for you. *) I'm assuming the Standard Model of Quantum Physics here. If you start to get into String Theory things get all sorts of complicated, but there's no universally accepted model of String Theory, so I'm just going to ignore it exists. (As most Chemists do.) Also note that the Standard Model doesn't even attempt to predict where this intrinsic internal angular momentum comes from. It's just a property of electrons, taken as a given. Asking "why do electrons have spin?" is like asking "why are electrons charged?": They just are.
{ "source": [ "https://chemistry.stackexchange.com/questions/58020", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
58,024
As far as I know: covalent bonds are formed because atoms are unstable; they have unfilled valence electrons; they want to fill up their outer shells and become stable. BUT, why do compounds like ClF3 exist? Why do their valence shells have more than 8 electrons? Isn't it about "filling up the shell"?
{ "source": [ "https://chemistry.stackexchange.com/questions/58024", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33958/" ] }
58,029
In the question I have to find out the ionic product of water using the fact that the degree of dissociation of water at $18\ \mathrm{^\circ C}$ is $1.8\times10^{-9}$. My attempt: Let the concentration of water be $1\ \mathrm M$. $$\ce{H2O <=> H+ + OH-}$$ Initial: $$\begin{align} [\ce{H2O}] &= 1\ \mathrm M\\[6pt] [\ce{H+}] &= 0\ \mathrm M\\[6pt] [\ce{OH-}] &= 0\ \mathrm M \end{align}$$ At equilibrium: $$\begin{align} [\ce{H2O}] &= (1 - 1.8\times10^{-9})\ \mathrm M\\[6pt] [\ce{H+}] &= 1.8\times10^{-9}\ \mathrm M\\[6pt] [\ce{OH-}] &= 1.8\times10^{-9}\ \mathrm M \end{align}$$ Therefore, $$\begin{align} K_\mathrm w &= \frac{[\ce{H+}][\ce{OH-}]}{[\ce{H2O}]}\\[6pt] &= \left(1.8\times10^{-9}\right) \times \left(1.8\times10^{-9}\right)\\[6pt] &= 3.24 \times 10^{-18} \end{align}$$ According to my book, the answer is $1.0\times10^{-14}$, which I know is correct. I want to know where I am going wrong.
{ "source": [ "https://chemistry.stackexchange.com/questions/58029", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/30465/" ] }
58,135
I always thought that p-orbitals had a dumbbell shape as pictured below. ( image source ) However, I was reading an article (see Table 1, item 2) that says, "...the spherical $\mathrm{p_{1/2}}$ subshell..." (my emphasis). The implication being that the $\mathrm{p_{1/2}}$ orbital (subshell?) has a greater electron density near the nucleus than the $\mathrm{p_{3/2}}$ orbital (subshell?). Why is this and why do they call the $\mathrm{p_{1/2}}$ orbital spherical?
As orthocresol mentioned, this is all about relativity, so let's talk about it. I am hardly an expert myself, but I'll try to give an answer to the best of my limited knowledge. For an interesting and accessible overview of the incorporation of relativistic effects into chemistry, I recommend the 2012 review article " Relativistic Effects in Chemistry: More Common Than You Thought " by Pekka Pyykkö (free access!), one of the great names in the field. The theoretical basis for most of our work as chemists is "old" quantum mechanics, developed up to around 1925, and one of the hallmarks of this era is the Schrödinger equation. The whole concept of orbitals comes as the solution of the Schrödinger equation for the hydrogen atom, (i.e. a two-body system composed of an electron in a spherically symmetric electric field originating from the nucleus). The stationary states in this system are spherical harmonics, described by the quantum numbers $n$, $l$ and $m$ (or $m_l$). The first two numbers determine the shape of the orbital, but we've gotten used to calling the second number by a letter ( s , p , d , f , g , ... for 0, 1, 2, 3, 4, ...). So that's what orbital names, such as $\textbf{4s}$, $\textbf{3d}$ and $\textbf{5f}$ mean. That said, old quantum mechanics is fundamentally incomplete, in a way which very much matters to chemists. In particular, in 1922 the Stern-Gerlach experiment discovered the property of spin . While its existence could be accommodated within the prior framework by "tacking it on" and adding an extra quantum number $s$, there was no theoretical background to explain it. This changed a few years later, when Paul Dirac incorporated special relativity into quantum mechanics. Old quantum mechanics implicitly assumes the speed of light is infinite, and by adequately introducing its finitude, many new experimentally-verified phenomena appear, such as antiparticles and, indeed, spin. In other words, spin is a purely relativistic quantum mechanical property ! The equivalent of the Schrödinger equation in relativistic quantum mechanics is the Dirac equation. Much like the Schrödinger equation, the Dirac equation can be solved for a two-body system composed of an electron in a spherically-symmetric electric field created by the nucleus. The resulting solutions ("Dirac orbitals"), however, are somewhat different. In particular, the quantum numbers $l$ and $s$ are no longer "good" quantum numbers (that is, they no longer describe stationary states, due to spin-orbit coupling ), but their sum ($\overrightarrow{j}=\overrightarrow{l}+\overrightarrow{s}$) is. For the electron, $s=\frac{1}{2}$, and thus for each value of $l$, there are now two possible values of $j$, given by $|l+\frac{1}{2}|$ and $|l-\frac{1}{2}|$. This means that the s , p , d , f ... subshells from the Schrödinger solutions are now split according to their $j$ values. The s subshells are unique in that they don't split ($|0+\frac{1}{2}| = |0-\frac{1}{2}| = \frac{1}{2}$), but are now termed $\textbf{s}_{1/2}$ subshells. The p subshells, which are triply degenerate via the Schrödinger equation, now split into a single $\textbf{p}_{1/2}$ orbital and two degenerate $\textbf{p}_{3/2}$ orbitals. The d subshells, previously quintuply degenerate, split into two degenerate $\textbf{d}_{3/2}$ orbitals and three degenerate $\textbf{d}_{5/2}$ orbitals. The trend continues similarly for the further subshells. 
Here is a figure from " Relativistic quantum chemistry: The electrons and the nodes ", which will hopefully make it clearer: So what are the physical consequences of these different mathematical solutions? Firstly, the orbitals change in energy. Orbitals with a lower $j$ for a given $n$ and $l$ have lower energy (e.g., the single $\textbf{4p}_{1/2}$ orbital is lower in energy than the two degenerate $\textbf{4p}_{3/2}$ orbitals). The shape of the orbitals also change; all orbitals with $j=\frac{1}{2}$ are spherical, regardless of $l$ . According to " Contour diagrams for relativistic orbitals ": Contour plots for $^2P_{1/2}$ and $^2S_{1/2}$ are spherically symmetrical, while those for $n = 2$ and $3$, $l = 1$, $j= 3/2$, and $m = \pm 3/2$ look very similar to those for p orbitals already published in this journal. Similarly, all orbitals with $j=\frac{3}{2}$ are dumbbell-shaped, including $\textbf{d}_{3/2}$ orbitals. The article brings a helpful schematic: As additional confirmation, " Pictorial Representations of the Dirac Electron Cloud for Hydrogen-Like Atoms " states: As an example of these two theorems the angular charge distributions for the magnetic states $m = \pm 3/2$ for a $^2P_{3/2}$ term are not only the same but are also identical with the two magnetic states $m = \pm 3/2$ of the $^2D_{3/2}$ term. It should be pointed out that the radial charge distributions of the $^2P$ and $^2D$ terms are different. [...] In this connection one of the consequences of the Dirac theory is that a single p electron or two similar p electrons, depending on whether $j=\frac{1}{2}$ or $j=\frac{3}{2}$ respectively, present spherical symmetry. Not only are all S states spherically symmetrical, as on the Schrodinger theory, but also all one electron systems having but one valence electron and that electron in a $^2P_{1/2}$ state, eg., the normal states of B, Al, Ga, In, Tl. The article also shows a table of the angular distribution functions for the wavefunction. Note how the function is equal to $1$ for $\textbf{s}_{1/2}$ and $\textbf{p}_{1/2}$ orbitals. That is, there is no angular dependence, implying spherical symmetry. The article also brings some diagrams and pictures of simulated orbital shapes. I wouldn't be able to tell you why this is the case any more than I would be able to explain the shapes for the solutions to the Schrödinger equation - that's just how it pans out. Another difference is that the Dirac orbitals do not possess nodal points or planes. They are now replaced with "pseudo-nodes", where the wavefunction assumes very small values, but never reaches zero anywhere. This is nicely shown in this figure from " Relativistic quantum chemistry: The electrons and the nodes ":
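As a compact summary of the splitting pattern just described (my addition, not part of the answer), here is a tiny Python sketch that lists, for each non-relativistic subshell $l$, the relativistic $j$ subshells it splits into and how many orbitals (Kramers pairs) and $m_j$ states each contains:

labels = "spdfg"
for l in range(5):
    two_js = sorted({abs(2 * l - 1), 2 * l + 1})   # 2j for the allowed j = |l - 1/2| and l + 1/2
    for two_j in two_js:
        n_mj = two_j + 1        # number of m_j states (spin-orbitals), i.e. 2j + 1
        n_orb = n_mj // 2       # Kramers pairs, i.e. "orbitals" as counted in the text above
        print(f"{labels[l]}_{two_j}/2: {n_orb} orbital(s), {n_mj} m_j states")

Running it reproduces the statements above: s does not split, p gives one p1/2 and two p3/2 orbitals, d gives two d3/2 and three d5/2 orbitals, and so on.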
{ "source": [ "https://chemistry.stackexchange.com/questions/58135", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/4231/" ] }
58,138
I know that Gibbs free energy change represents the amount of the non-expansionary work that a reaction is capable of doing but what happens to this energy at equilibrium? Why is the system unable to do work? Why is the Gibbs free energy change equal to zero at equilibrium? Edit: Please don't simply write the derivation that we use to conclude that ∆$G$ must be zero at equilibrium. Please try to explain the physical significance behind this fact. Please also note that I have already searched this website for an answer to my question, found only the following question but the answers only explained the Math: Gibbs free energy-zero or minimum Thanks ever so much in advance :) Regards.
{ "source": [ "https://chemistry.stackexchange.com/questions/58138", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/-1/" ] }
58,625
According to Hund's first rule, a set of degenerate orbitals are singly occupied first, before the second slot in any of the orbitals are populated. This is quite intuitive because electron-electron repulsions would make an atom more unstable if the electrons start filling two at a time in a single orbital. However, the rule also states that in the ground state, electrons that singly fill the orbitals will adopt the same spin. Why is it necessary for the electrons to have the same spin?
The lowest energy state has parallel spins to maximize the exchange energy. As you say, there is a Coulomb repulsion when two electrons are put in the same orbital. There is also a quantum mechanical effect. The exchange energy (which is favorable) increases with the number of possible exchanges between electrons with the same spin and energy. Consider the three ways of placing two electrons in a pair of degenerate orbitals: both paired in one orbital (top), in separate orbitals with opposite spins (middle), and in separate orbitals with parallel spins (bottom). Going from the top state to the middle state, we remove the Coulomb repulsion between electrons in the same orbital. Going from the middle to the bottom (the most stable state, and the one predicted by Hund's rule), we gain the exchange energy, because these two electrons are indistinguishable.
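To make the counting explicit (my addition, following the standard textbook treatment rather than this answer's wording): among $N$ electrons of the same spin in degenerate orbitals, the number of exchange-stabilising pairs is $$N_\mathrm{exchange} = \binom{N}{2} = \frac{N(N-1)}{2}.$$ For the two electrons above, the parallel-spin arrangement therefore gains one exchange integral $K$ of stabilisation relative to the antiparallel arrangement in separate orbitals, while the doubly occupied arrangement gains no exchange and additionally pays the larger Coulomb repulsion of putting both electrons in the same orbital.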
{ "source": [ "https://chemistry.stackexchange.com/questions/58625", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/25799/" ] }
58,630
As far as I can tell, there doesn't seem to be a chiral carbon in the compound, 1-ethylidene-4-methylcyclohexane. The C-4 of the cyclohexane ring has two groups with exactly the same connectivity, and the exocyclic double bond can't give rise to optical isomerism. What am I missing?
The strict criterion for a compound to display chirality is that it must not be superimposable upon its mirror image. Let's ignore the chair conformation of the ring for a while, and assume it adopts a planar conformation. You could draw a side-on view of the ring like this: We can see that if we reflect the molecule (in the plane of the ring + double bond), its mirror image is not superimposable on itself, which makes it chiral: In practice, the cyclohexane ring does adopt a chair conformation. That does not fundamentally affect the fact that the compound is chiral, as the presence of a chair conformation cannot "remove" this existing source of chirality (for example, there is no way for the hydrogen and methyl groups to swap places by virtue of a cyclohexane ring flip or a similar process). Actually, this is very similar to the case of an allene, which you may or may not be familiar with. Notice how the conformation drawn above looks almost like this allene: with the planar ring taking the place of the second double bond. These are examples of axial chirality ( Wikipedia ; IUPAC ), where the chirality stems from the disposition of groups about an axis (in both cases, the axis in question is the C=C bond axis). Further explanation of axial chirality in allenes can be found in this question . A note on nomenclature The tetrahedral chiral centre can be named with the usual (R) and (S) stereodescriptors, as described in user55119's answer . When it comes to the double bond, ChemDraw (and likely some other software) suggest that they can be named with the familiar stereodescriptors (E) and (Z) (the process is also described in user55119's answer ). However, this is not fully appropriate. The exact rules are complicated, but it boils down to the fact that the double bond in question is enantiomorphic : that means that when it is reflected, the configuration of the double bond is reversed. We can see that from the names generated above: the two mirror images have supposedly different configurations at the C=C double bond. The descriptors (E) and (Z) are supposed to be used only for diastereomorphic double bonds, where the configuration is invariant (i.e. does not change) upon reflection. The large majority of stereogenic double bonds fall into this category, but this one does not. In place of (E) and (Z) , the preferred descriptors are respectively seqTrans and seqCis (see mykhal's answer , and P-92.1.1 (f) of the 2013 IUPAC Blue Book) .
{ "source": [ "https://chemistry.stackexchange.com/questions/58630", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/16683/" ] }
58,846
A catalyst will provide a new path with a lower activation energy (Figure 1). Won't this mean the forward and backward reactions will both speed up (as they both have a lower activation energy path to take)? So then what is the point of using a catalyst? Or alternatively, how does the catalyst provide a net benefit for the forward reaction if both the forward and backward reaction are sped up? I am assuming it might have something to do with the reactants having a higher energy than the products, but I can't think up a proper formal reason that a catalyst has a net benefit for the forward reaction. Figure 1 (source: Chemguide.co.uk )
Your realisation is correct and something chemistry teachers try to hammer into their students' heads time and time again (and yet, the point is still often lost): Catalysts will never change the thermodynamics of a reaction. They only ease the path of the reaction. Forward and backward reactions will be accelerated equivalently. So what is the benefit of a catalyst? There are multiple ones. Speed Take for example the Haber-Bosch process to synthesise ammonia from nitrogen and hydrogen. $$\ce{N2 + 3 H2 <=> 2 NH3}\tag{1}$$ $$\Delta_\mathrm{r} H^0_\mathrm{298~K} = -45.8~\mathrm{\frac{kJ}{mol}}\ \text{(per mole of ammonia)}$$ This reaction is exothermic and thus should, theoretically or thermodynamically, proceed spontaneously, e.g. if you mixed nitrogen and hydrogen in the appropriate ratio and added a spark. It does not, however. Significant activation energy is required to cleave the $\ce{N#N}$ triple bond. Typical methods to add activation energy include heating. In the Haber-Bosch process, the mixture is heated to $400$ to $500~\mathrm{^\circ C}$ to supply the required activation energy. However, since the reaction is exothermic, heating will favour the reactant side. Increasing the pressure improves the entropic term of the Gibbs free energy equation, hence why pressures of $15$ to $25~\mathrm{MPa}$ are used. Catalysts, based on iron with different promoters, are used to accelerate the reaction. By using catalysts, one can lower the temperature required in a trade-off between speed of reaction and favouring the product side of the equilibrium. With the conditions and catalysts used, one achieves a yield of $\approx 15~\%$ of ammonia within a reasonable timeframe. Not employing a catalyst would give much lower yields at much longer timeframes, which is economically much less feasible. Direct reaction path not accessible This is mainly true for many transition-metal catalysed carbon-carbon bond formation reactions, but is also true for some inorganic processes like the disproportionation of hydrogen peroxide as in equation $(2)$. $$\ce{2 H2O2 -> 2 H2O + O2}\tag{2}$$ Hydrogen peroxide is a reactive chemical that cannot be stored forever, but the direct disproportionation path is not typically what degrades it. However, you can add $\ce{MnO2}$ to it. Upon addition, oxygen gas vigorously bubbles out of the solution. In this case, there was a kinetic barrier impeding the direct transformation due to reactants and products having different multiplicities (oxygen gas' ground state is a triplet, all others are singlets). The $\mathrm{d^3}$ ion manganese(IV) is a radical itself that can partake in different radical reactions, allowing the diradical oxygen to be liberated. Selectivity This is exceptionally true for transition-metal catalysed organic carbon-carbon bond formation reactions. Note first that the action of a catalyst is frequently depicted as a catalytic cycle: a reactant reacts with the catalyst to form some intermediate species, which rearranges or reacts with other reactants/additives/solvents in a set of specific steps until finally the products are liberated and the catalytic species is regenerated. Many such reactions require organic halides as one of the reacting species. And the first step is typically an oxidative addition as shown in equation $(3)$, where $\ce{X}$ is a halide ($\ce{Cl, Br, I}$). $$\ce{R-X + Pd^0 -> R-Pd^{+II}-X}\tag{3}$$ Palladium typically prefers oxidatively adding to bromides or iodides and tends to leave chlorides alone.
I myself have performed a reaction with near-quantitative yield in which a reactant contained both a $\ce{C-Br}$ and a $\ce{C-Cl}$ bond; selectively, only the $\ce{C-Br}$ bond took part in the palladium(0) catalysed Sonogashira reaction. Although I did not try it myself, I am pretty sure that switching to a nickel(0) catalyst species would shift the reaction in favour of reacting with the carbon-chlorine bond rather than the carbon-bromine one. Mildness This is basically a reiteration of the first point albeit with different intentions. Many a time in organic synthesis, one has a rather sensitive reactant that would degrade or undergo side-reactions if subjected to standard reaction conditions, such as high pH-value or elevated temperatures. As an example, consider a transesterification as shown in equation $(4)$. $$\ce{R-COO-Et + Me-OH <=> R-COO-Me + EtOH}\tag{4}$$ This reaction is, of course, an equilibrium and by using methanol as the solvent we can shift it to the product side. For the reaction to happen, one would need a base strong enough to deprotonate methanol, giving the methanolate anion, which can then attack the ester functionality. However, methanolate being a strong (and nucleophilic) base itself can introduce undesired side-reactions, including epimerisation of the α-carbon. One can catalyse this reaction by using $\ce{Bu2SnO}$, which will activate the carbonyl group, making it more susceptible to a nucleophilic attack. The reaction speed is the same but the conditions are milder (no additional base required) and the number of side-reactions is thus strongly limited. In particular, I noticed no epimerisation of the α-carbon in the tin(IV) catalysed method.
{ "source": [ "https://chemistry.stackexchange.com/questions/58846", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/31697/" ] }
59,124
Why is chirality defined differently for organic and inorganic compounds? Why are inorganic compounds deemed to be optically active if they have more than one of the same ligands attached to the central metal atom, but glycine is optically inactive for having two $\ce{H}$ atoms attached to the carbon atom? Finally, what should be the criterion for chirality (optical activity) and the perfect definition for it? Could anyone explain how $\ce{$cis$-[PtCl2(en)2]^2+}$ is optically active, but the $trans-$isomer is not?
The correct definition of chirality is given in the IUPAC gold book as follows: chirality The geometric property of a rigid object (or spatial arrangement of points or atoms) of being non-superposable on its mirror image; such an object has no symmetry elements of the second kind (a mirror plane, $\sigma = S_1$, a centre of inversion, $i = S_2$, a rotation-reflection axis, $S_{2n}$). If the object is superposable on its mirror image the object is described as being achiral. From this we can deduce that organic compounds which have a tetrahedral carbon with four different ligands are chiral. This is a rule of thumb and an easy way to recognise chirality. Glycine is optically inactive as it has an internal mirror plane: Often we teach this as a rule or a definition, because it is so easy to comprehend and to see. In inorganic chemistry, we usually deal with more complex structures, where it is not necessary that all ligands be different. One example of this is the cation in tris(ethylenediamine)cobalt(III) chloride: Another interesting example is (Λ/Δ)- cis -dichlorobis(ethylenediamine)cobalt(III) chloride.
{ "source": [ "https://chemistry.stackexchange.com/questions/59124", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/23944/" ] }
59,125
Does a resource exist that has the SMILES formula of the generalized pattern that represents the group? For instance: I have the aldehydes group, which has the pattern: -CHO . Is there a resource that lists each functional group with the generic pattern that defines the group and its SMILES representation? I'm hoping for, at the very least, getting the SMILES formula for the functional groups represented in this pdf: EPA Chemical Compatibility Chart Follow up question: Is SMILES still relevant or are there better ways to represent chemicals?
{ "source": [ "https://chemistry.stackexchange.com/questions/59125", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/34897/" ] }
59,809
From what I was taught in middle school, cations are those ions that move towards the cathode, likewise anions are those ions which move towards the anode. I didn't have issues with this back then, since all we studied were electrolytic cells. But now that we've crossed over to electrochemical cells, I'm having doubts. Every piece of chemistry literature I've come across so far, always deals with positive ions as cations and negative ions as anions; even my school textbook does it! That makes sense with respect to electrolytic cells, since the cathode's negative and anode's positive therefore positive ions would move to the cathode (hence 'cations') and negative ions would move to the anode (hence 'anions'). But in an electrochemical cell, the cathode's positive and the anode's negative. So if ions are to be classed according to the electrodes they move over to, then positive ions would be anions and negative ions would be cations, which is exactly opposite to the first case (on the basis of electrolytic cells). This is really confusing.... So should I refer to positive and negative ions as cations and anions respectively or as anions and cations respectively? Or are both acceptable, depending on the scenario?
Yes, cations always have a positive charge and anions always have a negative one. The difficulty is that the terms cathode and anode do not always correspond to the same pole. The cathode is that pole of an electrolytic/electrochemical cell where reduction takes place (cathodic reduction), while the anode is where oxidation takes place (anodic oxidation). Since which pole hosts oxidation and which hosts reduction depends on the type of cell, the terms cathode and anode refer to different poles in different cells.
{ "source": [ "https://chemistry.stackexchange.com/questions/59809", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
59,822
The Heisenberg uncertainty principle states that $$\Delta x \Delta p \geq \frac{\hbar}{2}$$ where $\Delta x$ is the uncertainty in the position, $\Delta p$ is the uncertainty in linear momentum, and $\hbar = 1.054571800(13) \times 10^{-34}\ \mathrm{J\ s}$ [ source ] is the reduced Planck constant This means that, regardless of what quantum mechanical state the particle is in, we cannot simultaneously measure its position and momentum with perfect precision. I read that this is intrinsically linked to the fact that the position and momentum operators do not commute: $[\hat{x},\hat{p}] = \mathrm{i}\hbar$. How can I derive the uncertainty principle, as given above?
The proof I will use is taken from Griffiths, Introduction to Quantum Mechanics , 2nd ed., pp 110-111. Defining "uncertainty" Let's assume that the normalised state $|\psi\rangle$ of a particle can be expanded as a linear combination of energy eigenstates $|n\rangle$, with $\hat{H}|n\rangle = E_n |n\rangle$. $$| \psi \rangle = \sum_n c_n |n\rangle \tag{1}$$ The expectation value (the "mean") of a quantity, such as energy, is given by $$\begin{align} \langle E\rangle &= \langle \psi | H | \psi \rangle \tag{2} \end{align}$$ and the variance of the energy can be defined analogously to that used in statistics , which for a continuous variable $x$ is simply the expectation value of $(x - \bar{x})^2$: $$\sigma_E^2 = \left\langle (E - \langle E\rangle)^2 \right\rangle \tag{3}$$ The standard deviation is the square root of the variance, and the "uncertainty" refers to the standard deviation. It's more proper to use $\sigma$ as the symbol, instead of $\Delta$, and this is what you will see in most "proper" texts. $$\sigma_E = \sqrt{\left\langle (E - \langle E\rangle)^2 \right\rangle} \tag{4}$$ However, it's much easier to stick to the variance in the proof. Let's generalise this now to any generic observable, $A$, which is necessarily represented by a hermitian operator, $\hat{A}$. The expectation value of $A$ is merely a number, so let's use the small letter $a$ to refer to it. With that, we have $$\begin{align} \sigma_A^2 &= \left\langle (A - a)^2 \right\rangle \tag{5} \\ &= \left\langle \psi \middle| (\hat{A} - a)^2 \middle| \psi \right\rangle \tag{6} \\ &= \left\langle \psi \middle| (\hat{A} - a) \middle| (\hat{A} - a)\psi \right\rangle \tag{7} \\ &= \left\langle (\hat{A} - a)\psi \middle| (\hat{A} - a) \middle| \psi \right\rangle \tag{8} \\ &= \left\langle (\hat{A} - a)\psi \middle| (\hat{A} - a)\psi \right\rangle \tag{9} \end{align}$$ where, in going from $(7)$ to $(8)$, I have invoked the hermiticity of $(\hat{A} - a)$ (since $\hat{A}$ is hermitian and $a$ is only a constant). Likewise, for a second observable $B$ with $\langle B \rangle = b$, $$\sigma_B^2 = \left\langle (\hat{B} - b)\psi \middle| (\hat{B} - b)\psi \right\rangle \tag{10}$$ The Cauchy-Schwarz inequality ... states that , for all vectors $f$ and $g$ belonging to an inner product space (suffice it to say that functions in quantum mechanics satisfy this condition ), $$\langle f | f \rangle \langle g | g \rangle \geq |\langle f | g \rangle|^2 \tag{11}$$ In general, $\langle f | g \rangle$ is a complex number, which is why we need to take the modulus. 
By the definition of the inner product, $$\langle f | g \rangle = \langle g | f \rangle^* \tag{12}$$ For a generic complex number $z = x + \mathrm{i}y$, we have $$|z|^2 = x^2 + y^2 \geq y^2 \qquad \qquad \text{(since }x^2 \geq 0\text{)} \tag{13}$$ But $z^* = x - \mathrm{i}y$ means that $$\begin{align} y &= \frac{z - z^*}{2\mathrm{i}} \tag{14} \\ |z|^2 &\geq \left(\frac{z - z^*}{2\mathrm{i}}\right)^2 \tag{15} \end{align}$$ and plugging $z = \langle f | g \rangle$ into equation $(15)$, we get $$|\langle f | g \rangle|^2 \geq \left[\frac{1}{2\mathrm{i}}(\langle f | g \rangle - \langle g | f \rangle) \right]^2 \tag{16}$$ Now, if we let $| f \rangle = | (\hat{A} - a)\psi \rangle$ and $| g \rangle = | (\hat{B} - B)\psi \rangle$, we can combine equations $(9)$, $(10)$, $(11)$, and $(16)$ to get: $$\begin{align} \sigma_A^2 \sigma_B^2 &= \langle f | f \rangle \langle g | g \rangle \tag{17} \\ &\geq |\langle f | g \rangle|^2 \tag{18} \\ &\geq \left[\frac{1}{2\mathrm{i}}(\langle f | g \rangle - \langle g | f \rangle) \right]^2 \tag{19} \end{align}$$ Expanding the brackets If you've made it this far - great job - take a breather before you continue, because there's more maths coming. We have 1 $$\begin{align} \langle f | g \rangle &= \left\langle (\hat{A} - a)\psi \middle| (\hat{B} - b)\psi \right\rangle \tag{20} \\ &= \langle \hat{A}\psi |\hat{B}\psi \rangle - \langle a\psi |\hat{B}\psi \rangle - \langle \hat{A}\psi | b\psi \rangle + \langle a\psi |b\psi \rangle \tag{21} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - a\langle \psi |\hat{B}\psi \rangle - b\langle \hat{A}\psi | \psi \rangle + ab\langle \psi |\psi \rangle \tag{22} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - ab - ab + ab \tag{23} \\ &= \langle \psi |\hat{A}\hat{B}|\psi \rangle - ab \tag{24} \end{align}$$ Likewise, $$\langle g | f \rangle = \langle \psi |\hat{B}\hat{A}|\psi \rangle - ab \tag{25}$$ So, substituting $(24)$ and $(25)$ into $(19)$, $$\begin{align} \sigma_A^2 \sigma_B^2 &\geq \left[\frac{1}{2\mathrm{i}}(\langle\psi |\hat{A}\hat{B}|\psi \rangle - \langle \psi |\hat{B}\hat{A}|\psi\rangle) \right]^2 \tag{26} \\ &= \left[\frac{1}{2\mathrm{i}}(\langle\psi |\hat{A}\hat{B} - \hat{B}\hat{A}|\psi \rangle ) \right]^2 \tag{27} \end{align}$$ The commutator of two operators is defined as $$[\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} \tag{28}$$ So, the term in parentheses in equation $(27)$ is simply the expectation value of the commutator, and we have reached the Robertson uncertainty relation : $$\sigma_A^2 \sigma_B^2 \geq \left(\frac{1}{2\mathrm{i}}\langle[\hat{A},\hat{B} ]\rangle \right)^2 \tag{29}$$ This inequality can be applied to any pair of observables $A$ and $B$. 2 The Heisenberg uncertainty principle Simply substituting in $A = x$ and $B = p$ gives us $$\sigma_x^2 \sigma_p^2 \geq \left(\frac{1}{2\mathrm{i}}\langle[\hat{x},\hat{p} ]\rangle \right)^2 \tag{30}$$ The commutator of $\hat{x}$ and $\hat{p}$ is famously $\mathrm{i}\hbar$, 3 and the expectation value of $\mathrm{i}\hbar$ is of course none other than $\mathrm{i}\hbar$. This completes the proof: $$\begin{align} \sigma_x^2 \sigma_p^2 &\geq \left(\frac{1}{2\mathrm{i}}\cdot\mathrm{i}\hbar \right)^2 \tag{31} \\ &= \left(\frac{\hbar}{2}\right)^2 \tag{32} \\ \sigma_x \sigma_p &\geq \frac{\hbar}{2} \tag{33} \end{align}$$ where we have simply "removed the square" on both sides because as standard deviations, $\sigma_x$ and $\sigma_p$ are always positive. Notes 1 I have skipped some stuff. 
Namely, $\langle \hat{A}\psi |\hat{B}\psi \rangle = \langle \psi |\hat{A}\hat{B}|\psi \rangle$ which is quite straightforward to prove using the hermiticity of both operators; $\langle \psi |\hat{A}|\psi \rangle = a$; $\langle \psi |\hat{B}|\psi \rangle = b$; and $a = a^*$ since it is the expectation value of a physical observable, which must be real. 2 This does not apply to, and cannot be used to derive, the energy-time uncertainty principle. There is no time operator in quantum mechanics, and time is not a measurable observable , it is only a parameter . 3 Technically, it is a postulate of quantum mechanics. (If I am not wrong, it derives from the Schrodinger equation, which is itself a postulate.)
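As a quick numerical sanity check of the final inequality (my addition, not part of the derivation above), the sketch below evaluates $\sigma_x$ and $\sigma_p$ for a discretised Gaussian wave packet, for which the product should sit at the minimum value $\hbar/2$. It assumes NumPy, works in units with $\hbar = 1$, and the grid parameters and width are arbitrary choices.

```python
import numpy as np

hbar = 1.0                       # natural units
N, L = 4096, 40.0                # grid points and box length (arbitrary)
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]

s = 1.3                          # width parameter of the Gaussian (arbitrary)
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalise

# position uncertainty from <x> and <x^2>
ex  = np.sum(x * np.abs(psi)**2) * dx
ex2 = np.sum(x**2 * np.abs(psi)**2) * dx
sigma_x = np.sqrt(ex2 - ex**2)

# momentum-space wavefunction via FFT (p = hbar * k)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = k[1] - k[0]
phi = np.fft.fft(psi) * dx
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)          # renormalise on the k-grid
p = hbar * k
ep  = np.sum(p * np.abs(phi)**2) * dk
ep2 = np.sum(p**2 * np.abs(phi)**2) * dk
sigma_p = np.sqrt(ep2 - ep**2)

print(sigma_x * sigma_p, hbar / 2)   # both ~0.5 for a Gaussian wave packet
```

For any non-Gaussian wave packet the same script gives a product larger than $\hbar/2$, consistent with the inequality derived above.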
{ "source": [ "https://chemistry.stackexchange.com/questions/59822", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/16683/" ] }
60,716
Upon coming back to University I've started studying basic chemistry again. I've always had trouble with the concept of "moles". According to my study book, 1 mole of any element or compound is equal to its molecular weight in grams. I understand that part but later in the paragraph it also says: One mole of any substance always contains exactly the same number of solute particles, that is, $6.02 \times 10^{23}$ (Avogadro's number). So whether you weigh out 1 mole of glucose (180 g) or 1 mole of water (18 g) or 1 mole of methane (16 g), in each case you will have $6.02 \times 10^{23}$ molecules of that substance. If 1 mole equals the molecular weight of a compound, then how come 1 mole of glucose doesn't equate to 1 molecule of glucose? Basically, why is 1 mole of glucose $6.02\times10^{23}$ molecules instead of 1 molecule?
The first thing to realize is that "mole" is not a mass unit. It is simply a quantity - a number - like dozen or gross or score . Just as a dozen eggs is 12 eggs, a mole of glucose is $6.02 \times 10^{23}$ glucose molecules, and a mole of carbon atoms is $6.02 \times 10^{23}$ carbon atoms. "Moles" are only associated with mass because individual objects have mass, and thus a mole of objects also has a certain mass. So why $6.02 \times 10^{23}$? What's so special about Avogadro's number? Well, nothing, really, it just makes calculations work out nicely. Avogadro's number is defined as the number of $\ce{^{12}C}$ atoms which weigh 12 g. So it's effectively a ratio: how many times larger is a gram than an atomic mass unit? If you have one atom of $\ce{^{12}C}$, it weighs 12 amu. If you have $6.02 \times 10^{23}$ of them, they weigh 12 g. If you have one molecule of glucose, it weighs 180 amu (or thereabouts). If you have $6.02 \times 10^{23}$ of them, they weigh 180 g (or thereabouts). -- This is just like if one egg weighs 60 g (on average), then a dozen of them weigh 720 g (on average), and if one cup of flour weighs 120 g (on average), then a dozen cups of flour weigh 1440 g (on average). The only difference is that dozen is defined forwards ("a dozen is twelve"), whereas a mole is defined "backwards" ("a dozen is the number of 60 g eggs that are in a collection of eggs that weighs 720 g.") This convenience definition - pegging the value of Avogadro's number directly to the difference in scale between the amu and the gram - is what is probably throwing you. Moles are not a mass unit, but the definition is intimately tied to mass units. The equivalence in numbers at the atomic scale (amu) and at the macroscopic scale (grams) can also result in chemists playing fast and loose with terminology, quickly working back and forth from atomic to macroscopic scale, without a necessarily clear distinction between the two.
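If it helps to see the bookkeeping explicitly, here is a tiny sketch (my addition, using rounded molar masses) that just multiplies out the numbers from the answer above:

```python
N_A = 6.022e23          # Avogadro's number, particles per mole

# approximate molar masses in g/mol
molar_mass = {"glucose": 180.0, "water": 18.0, "methane": 16.0}

for substance, M in molar_mass.items():
    mass = M                      # weigh out one mole, i.e. M grams
    moles = mass / M              # = 1 mol in each case
    particles = moles * N_A       # = 6.022e23 molecules in each case
    print(f"{mass:6.1f} g of {substance:8s} = {moles:.1f} mol = {particles:.3e} molecules")
```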
{ "source": [ "https://chemistry.stackexchange.com/questions/60716", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/35899/" ] }
60,829
My teacher told me that trans-decalin (see below) is achiral due to the presence of both a centre of symmetry and a plane of symmetry. But I could not spot the plane of symmetry until now. Can someone point out the plane of symmetry in a diagram?
It's not easy to see from a diagram, because the drawing distorts bonds and angles. I recommend building it with a ball-and-stick model set. You can also use a molecular viewer to model it; there are a couple of open-source (or at least free) ones out there. I have calculated the molecule at the DF-BP86/def2-SVP level of theory. The point group of the molecule is $C_\mathrm{2h}$. In the following model I have highlighted the plane of symmetry and the rotational $C_2$ axis. At their intersection is an inversion centre, $i$. (I needed to downscale this a lot; click on the image to get to a high-resolution still. Images created with ChemCraft, assembled with GIMP.)
{ "source": [ "https://chemistry.stackexchange.com/questions/60829", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/-1/" ] }
60,841
Soap comes in different colors, but why is soap lather always white?
Though @DHMO's answer is quite interesting, you should take it with a pinch of salt. I have to disagree with it in some respects. Soap colorants range from various kinds of mica to dyes found naturally in plants. Typically, when you dye anything, the quantity of dye used is very small in comparison to the quantity of the substance being dyed. Too much dye and it could stain your hands or clothes, so soap manufacturers use as little dye as possible, just enough to give the soap a good color. For the rest of this answer, I will assume (reasonably enough) that by 'colored' soap you are referring to soap of any color other than white. Now when you lather using soap, only a really tiny bit actually goes into the water. As @DHMO correctly points out, the dye/pigment used to color the soap is greatly diluted. Why don't you carve off a piece of colored soap (more or less the size of an almond) and dissolve it in a mug of water by gently stirring? I emphasize "gently" so that you don't stir up a lather. Have a look at the water. You'll see that the soap, apart from making the water more cloudy, has not visibly imparted any particular color to it. This is not unexpected; as I've already mentioned, the quantity of dye used is very small. Now if you go ahead and agitate that soap solution, it will give rise to white lather, and this shouldn't be surprising anymore. @DHMO goes on to mention that Total Internal Reflection (TIR) is what imparts the white color to the foam. But this is grossly incorrect. While TIR can produce 'white light', it is not the dominant phenomenon acting here (I'm not saying it's completely absent either). What largely gives foam its white appearance is another phenomenon called scattering . Now you might ask, 'Then why isn't soap water white?' Well, since the foam is made up of lots of tiny bubbles, light passing through it has to encounter many surfaces, and it's these surfaces that scatter the light in so many directions. So, to say it straight: DHMO's answer is incorrect (no offence, DHMO). Remember I said you can't see any visible coloration in the water because the dye is present in really small quantities? Well, here's a way to validate that claim: simply combine a teaspoon of red food-color with a bit of hand-wash (now that's really concentrated). Now you can lather the soap and lo and behold! You have red foam. [Credit to @ACuriousMind from Physics.SE for that bit on why soap water isn't white. He also confirmed that the white color of foam is due to scattering.] Edit: More kudos to @ACuriousMind for providing this amazing link on scattering.
{ "source": [ "https://chemistry.stackexchange.com/questions/60841", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/31666/" ] }
60,860
We were dealing with the Third Law of Thermodynamics in class, and my teacher mentioned something that we found quite fascinating: It is physically impossible to attain a temperature of zero kelvin (absolute zero). When we pressed him for the rationale behind that, he asked us to take a look at the graph for Charles' Law for gases: His argument is that when we extrapolate the graph to -273.15 degrees Celsius (i.e. zero kelvin), the volume drops down all the way to zero; and "since no piece of matter can occupy zero volume ('matter' being something that has mass and occupies space), from the graph for Charles' Law it is very clear that it is not possible to attain the temperature of zero kelvin". However, someone else gave me a different explanation: "To reduce the temperature of a body down to zero kelvin would mean removing all the energy associated with the body. Now, since energy is always associated with mass, if all the energy is removed there won't be any mass left. Hence it isn't possible to attain absolute zero." Who, if anybody, is correct? Edit 1: A noteworthy point made by @Loong a while back: (From the engineer's perspective) To cool something to zero kelvin, first you'll need something that is cooler than zero kelvin. Edit 2: I've got an issue with the 'no molecular motion' notion that I seem to find everywhere (including @Ivan's fantastic answer) but I can't seem to get cleared up. The notion: At absolute zero, all molecular motion stops. There's no longer any kinetic energy associated with molecules/atoms. The problem? I quote Feynman: As we decrease the temperature, the vibration decreases and decreases until, at absolute zero, there is a minimum amount of motion that atoms can have, but not zero. He goes on to justify this by bringing in Heisenberg's Uncertainty Principle: Remember that when a crystal is cooled to absolute zero, the atoms do not stop moving, they still 'jiggle'. Why? If they stopped moving, we would know where they were and that they have zero motion, and that is against the Uncertainty Principle. We cannot know where they are and how fast they are moving, so they must be continually wiggling in there! So, can anyone account for Feynman's claim as well? To the not-so-hardcore student of physics that I am (high-schooler here), his argument seems quite convincing. So to make it clear, I'm asking for two things in this question: 1) Which argument is correct? My teacher's or the other guy's? 2) At absolute zero, do we have zero molecular motion as most sources state, or do atoms go on "wiggling" in there as Feynman claims?
There was a story in my days about a physical chemist who was asked to explain some effect, illustrated by a poster on the wall. He did that, after which someone noticed that the poster was hanging upside down, so the effect appeared reversed in sign. Undaunted, the guy immediately explained it the other way around, just as convincingly as he did the first time. Cooking up explanations on the spot is a respectable sport, but your teacher went a bit too far. What's with that Charles' law? See, it is a gas law; it is about gases. And even then it is but an approximation. To make it exact, you have to make your gas ideal, which can't be done. As you lower the temperature, all gases become less and less ideal . And then they condense, and we're left to deal with liquids and solids, to which the said law never applied, not even as a very poor approximation. Appealing to this law when we are near the absolute zero is about as sensible as ruling out certain reaction mechanism on the grounds that it requires atoms to move faster than allowed by the road speed limit in the state of Hawaii. The energy argument is even more ridiculous. We don't have to remove all energy, but only the kinetic energy. The $E=mc^2$ part remains there, so the mass is never going anywhere. All that being said, there is no physical law forbidding the existence of matter at absolute zero. It's not like its existence will cause the world to go down with error 500. It's just that the closer you get to it, the more effort it takes, like with other ideal things (ideal vacuum, ideally pure compound, crystal without defects, etc). If anything, we're doing a pretty decent job at it. Using sophisticated techniques like laser cooling or magnetic evaporative cooling , we've long surpassed the nature's record in coldness.
{ "source": [ "https://chemistry.stackexchange.com/questions/60860", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
61,107
I was watching a video lecture today on hydrocarbons and came across this. The instructor says that, as there is a plane of symmetry, cis -1,2-dimethylcyclohexane is a meso -compound: However, the cyclohexane ring is never planar as it adopts a chair conformation. I cannot find any plane of symmetry in chair form. Like this: Is the compound meso or not?
cis -1,2-Dimethylcyclohexane is achiral, not because there is a plane of symmetry, but because it consists of two enantiomeric conformations which interconvert rapidly via ring flipping at normal temperatures. This is exactly the same case as amine inversion. "Chiral nitrogens" such as that in $\ce{NHMeEt}$ do not lead to chirality or optical activity because of rapid inversion of configuration at the nitrogen atom, leading to interconversion of the two enantiomeric forms 1a and 1b . Likewise, the ring flip in cis -1,2-dimethylcyclohexane leads to two different conformers. I have deliberately chosen to depict the ring flip in the following fashion, to make the mirror image relationship more obvious. The green methyl group, equatorial in conformer 2a , is changed into an axial methyl group in conformer 2b . Likewise, the blue methyl group goes from axial to equatorial. The 1,2- cis relationship between the two methyl groups is retained in both conformers. Each individual conformer can be said to be chiral, but just like how the amine is considered achiral, cis -1,2-dimethylcyclohexane as a whole is considered achiral. Is the compound meso ? According to the IUPAC Gold Book , a meso -compound is defined as: A term for the achiral member(s) of a set of diastereoisomers which also includes one or more chiral members. 1,2-Dimethylcyclohexane possesses two diastereomers, one cis and one trans form. The trans form is chiral, but the cis form is achiral, as explained above. Therefore, the cis form satisfies the above definition and is considered a meso -compound.
{ "source": [ "https://chemistry.stackexchange.com/questions/61107", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/-1/" ] }
61,176
As the wikipedia article for the exchange interaction so aptly notes, exchange "has no classical analogue." How wonderful. Exchange shows up essentially while enforcing the condition that two electrons should not be distinguishable. That makes sense as a condition to enforce, but I don't understand how this should influence the energetics of the system. For instance, when two neon atoms interact, there is a very small amount of energy contribution which is attributed to exchange. I understand that exchange is not a force because there is no force carrier, but there is indeed an interaction taking place which affects the energy of the system. What then am I meant to believe is going on? Am I supposed to believe that two electrons spontaneously switch places so as to maintain their indistinguishability? This seems ridiculous. Especially because such a process ought to happen over a finite period of time and thus be observable, but this doesn't seem to be the case (at least as far as I've heard). What is exchange then? I understand this may not even be an answerable question beyond the fact that it is an effect which drops out of the math and simply is. Hopefully someone has some interesting insight for me here. Also, I have seen this question which I believe is referencing the same effect because exchange is known to play a role in ferromagnetism, but nothing ever really caught hold there probably because the question referenced a specific application and so was very difficult to answer well.
In quantum chemistry, probably the easiest way to understand the "exchange interaction" is within the context of the Hartree-Fock model . $ \newcommand{\op}{\hat} \newcommand{\el}{_\mathrm{e}} \newcommand{\elel}{_\mathrm{ee}} \newcommand{\elnuc}{_{\mathrm{en}}} \newcommand{\core}{^{\mathrm{core}}} \newcommand{\bracket}[3]{\langle{#1}\vert{#2}\vert{#3}\rangle} $ To reduce the complexity of the electronic Schrödinger equation $$ \op{H}\el \psi\el(\vec{q}\el) = E\el \psi\el(\vec{q}\el) \, , $$ where $$ \op{H}\el = \op{T}\el + \op{V}\elnuc + \op{V}\elel = - \sum\limits_{i=1}^{n} \frac{1}{2} \nabla_{i}^{2} - \sum\limits_{\alpha=1}^{\nu} \sum\limits_{i=1}^{n} \frac{Z_{\alpha}}{r_{\alpha i}} + \sum\limits_{i=1}^{n} \sum\limits_{j > i}^{n} \frac{1}{r_{ij}} \, , $$ we would like to separate the electronic coordinates from each other by writing down the many-electron wave function as a product of one-electron ones. Unfortunately, such separation won't work due to the presence of $\op{V}\elel$ term in the electronic Hamiltonian. But if instead of $\op{V}\elel$ potential, which prevents the separation of electronic coordinates, a model potential of the form $\sum\nolimits_{i=1}^{n} v_{\mathrm{MF}}(\vec{r}_{i})$ had entered the electronic Schrödinger equation, it would be reduced to a set of $n$ one-electron Schrödinger equations with the many-electron wave function being just a simple product of their solutions, one-electron wave functions $\psi_{i}(\vec{r}_{i})$. More precisely, since we have to take spin of electrons and the principle of antisymmetry of the electronic wave function into account, the electronic wave function would be an antisymmetric product of one-electron wave functions $\psi_{i}(\vec{q}_{i})$ termed the Slater determinant , $$ \Phi = \frac{1}{\sqrt{n!}} \begin{vmatrix} \psi_{1}(\vec{q}_{1}) & \psi_{2}(\vec{q}_{1}) & \cdots & \psi_{n}(\vec{q}_{1}) \\ \psi_{1}(\vec{q}_{2}) & \psi_{2}(\vec{q}_{2}) & \cdots & \psi_{n}(\vec{q}_{2}) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_{1}(\vec{q}_{n}) & \psi_{2}(\vec{q}_{n}) & \cdots & \psi_{n}(\vec{q}_{n}) \end{vmatrix} \, , $$ where the one-electron wave functions $\psi_{i}$ of the joint spin-spatial coordinates of electrons $\vec{q}_{i} = \{ \vec{r}_{i}, m_{si} \}$ are referred to as spin orbitals. Physically the model potential of the above mentioned form represent the case when electrons do not instantaneously interact with each other, but rather each and every electron interacts with the average, or mean, electric field created by all other electrons and given by $v_{\mathrm{MF}}(\vec{r}_{i})$. 
Minimization of the electronic energy functional $$ E\el^{\mathrm{HF}} = \bracket{ \Phi }{ \op{H}\el }{ \Phi } = \bracket{ \Phi }{ \op{T}\el }{ \Phi } + \bracket{ \Phi }{ \op{V}\elnuc }{ \Phi } + \bracket{ \Phi }{ \op{V}\elel }{ \Phi } $$ with respect to variations in spin orbitals subject to constraint that spin orbitals remain orthonormal eventually results in the set of the so-called canonical Hartree-Fock equations which define canonical spin orbitals $\psi_i$ together with the corresponding orbital energies $\varepsilon_i$ $$ \op{F} \psi_{i}(\vec{q}_{1}) = ε_{i} \psi_{i}(\vec{q}_{1}) \, , \quad i = 1, \dotsc, n \, , $$ where $\op{F} = \op{H}\core + \sum\nolimits_{j=1}^{n} \big(\op{J}_{j} - \op{K}_{j} \big)$ is the Fock operator with $$ \op{H}\core \psi_{i}(\vec{q}_1) = \Big( - \frac{1}{2} \nabla_{1}^2 - \sum\limits_{\alpha=1}^{\nu} \frac{Z_{\alpha}}{r_{\alpha 1}} \Big) \psi_{i}(\vec{q}_1) \, , \\ \op{J}_{j} \psi_{i}(\vec{q}_1) = \bracket{ \psi_{j}(\vec{q}_2) }{ r_{12}^{-1} }{ \psi_{j}(\vec{q}_2) } \psi_{i}(\vec{q}_1) \, , \\ \op{K}_{j} \psi_{i}(\vec{q}_1) = \bracket{ \psi_{j}(\vec{q}_2) }{ r_{12}^{-1} }{ \psi_{i}(\vec{q}_2) } \psi_{j}(\vec{q}_1) \, . $$ Here, the first part of the Fock operator, $\op{H}\core$, is known as the one-electron core Hamiltonian and it is simply the Hamiltonian operator for a system containing the same number of nuclei, but only one electron. The second part of the Fock operator, namely, $\sum\nolimits_{j=1}^{n} \big(\op{J}_{j} - \op{K}_{j} \big)$, plays the role of the mean-field potential $v_{\mathrm{MF}}$ that approximates the true potential $\op{V}\elel$ of interactions between the electrons. The first operator $\op{J}_{j}$ rewritten in the following form $$ \op{J}_{j}(\vec{q}_{1}) = \int \psi_{j}^*(\vec{q}_{2}) r_{12}^{-1} \psi_{j}(\vec{q}_{2}) \,\mathrm{d} {\vec{q}_{2}} = \int \frac{ |\psi_{j}(\vec{q}_2)|^{2} }{ r_{12} } \mathrm{d} \vec{q}_2 \, , $$ can be clearly interpreted as the the Coulomb potential for electron-one at a particular point $\vec{r}_{1}$ in an electric field created by electron-two distributed over the space with the probability density $|ψ_{j}(\vec{q}_2)|^{2}$, and for this reason it is called the Coulomb operator . The second operator $\op{K}_{j}$ has no simple physical interpretation, but it can be shown to arise entirely due to anti-symmetry requirement, i.e. if instead of a Slater determinant one uses a simple product of spin orbitals, termed the Hartree product, there will be no $\op{K}_{j}$ terms in the resulting equations (the so-called Hartree equations). It is for that reason that $\op{K}_{j}$ is called the exchange operator . To quickly understand why exchange terms appear when using a Slater determinant instead of a simple product of spin orbitals look no further than at the Slater rules . For $\op{V}\elel$ which is a two-electron operator we have $$ \bracket{ \Phi }{ \op{V}\elel }{ \Phi } = \sum\limits_{i=1}^{n} \sum\limits_{j>i}^{n} \Big( \bracket{ \psi_{i}(1) \psi_{j}(2) }{ r_{12}^{-1} }{ \psi_{i}(1) \psi_{j}(2) } - \bracket{ \psi_{i}(1) \psi_{j}(2) }{ r_{12}^{-1} }{ \psi_{j}(1) \psi_{i}(2) } \Big) \, , $$ where the second (exchange) part would be absent if $\Phi$ were a simple product of spin orbitals, rather than a Slater determinant. 1) To the question on why do we interpret exchange terms in the Hartree-Fock as a manifestation of some sort of interaction. 
On the one hand, it must be clear from exchange terms $\op{K}_{j}$ being part of potential energy terms $v_{\mathrm{MF}}$, that they are indeed related to some type of interaction between electrons. However, it must be said that, unlike for the four fundamental interactions , there exists no force based on exchange interaction. Strictly speaking, I don't even think that the term "interaction" has a definite meaning in physics, unless it simply stands for the "fundamental interaction" in which case it is merely a synonym for "fundamental force". Exchange interaction is not one of the four fundamental interactions, so the meaning of the word "interaction" here is a bit different than that for fundamental interactions. Oxford Dictionary defines interaction as follows, 1.1 Physics A particular way in which matter, fields, and atomic and subatomic particles affect one another, e.g. through gravitation or electromagnetism. And exchange interaction indeed is a way by which electrons (of the same spin) "affect one another". This can be readily seen as follows. Consider the simplest many-electron system, a two-electron one, and let us examine the following two important events: $r_1$ - the event of finding electron-one at a point $\vec{r}_{1}$; $r_2$ - the event of finding electron-one at a point $\vec{r}_{2}$. Probability theory reminds us that in the general case the so-called joint probability of two events, say, $\vec{r}_{1}$ and $\vec{r}_{2}$, i.e. the probability of finding electron-one at point $\vec{r}_{1}$ and at the same time electron-two at point $\vec{r}_{2}$, is given by $$ \Pr(r_1 \cap r_2) = \Pr(r_1\,|\,r_2) \Pr(r_2) = \Pr(r_1) \Pr(r_2\,|\,r_1) \, , $$ where $\Pr(r_1)$ is the probability of finding electron-one at point $\vec{r}_{1}$ irrespective of the position of electron-two; $\Pr(r_2)$ is the probability of finding electron-two at point $\vec{r}_{2}$ irrespective of the position of electron-one; $\Pr(r_1\,|\,r_2)$ is the probability of finding electron-one at point $\vec{r}_{1}$, given that electron-two is at $\vec{r}_{2}$; $\Pr(r_2\,|\,r_1)$ is the probability of finding electron-two at point $\vec{r}_{2}$, given that electron-one is at $\vec{r}_{1}$. The first two of the above probabilities are referred to as unconditional probabilities, while the last two are referred to as conditional probabilities, and in general $\Pr(A\,|\,B) \neq \Pr(A)$, unless events $A$ and $B$ are independent of each other. If the above mentioned events $\vec{r}_{1}$ and $\vec{r}_{2}$ were independent, then the conditional probabilities would be equal to their unconditional counterparts $$ \Pr(r_1\,|\,r_2) = \Pr(r_1) \, , \quad \Pr(r_2\,|\,r_1) = \Pr(r_2) \, , $$ and the joint probability of $\vec{r}_{1}$ and $\vec{r}_{2}$ would simply be equal to the product of unconditional probabilities, $$ \Pr(r_1 \cap r_2) = \Pr(r_1) \Pr(r_2) \, . $$ In reality, however, the events $\vec{r}_{1}$ and $\vec{r}_{2}$ are not independent of each other, because electrons "affect one another", $$ \Pr(r_1\,|\,r_2) \neq \Pr(r_1) \, , \quad \Pr(r_2\,|\,r_1) \neq \Pr(r_2) \, , $$ and consequently the joint probability of $\vec{r}_{1}$ and $\vec{r}_{2}$ is not equal to the product of unconditional probabilities, $$ \Pr(r_1 \cap r_2) \neq \Pr(r_1) \Pr(r_2) \, . $$ Inequalities above hold true for two reasons, i.e. there are two ways electron "affect one another". 
First, electrons repel each other by Coulomb forces, and, as a consequence, at small distances between the electrons $\Pr(r_1\,|\,r_2) < \Pr(r_1)$ and $\Pr(r_2\,|\,r_1) < \Pr(r_2)$, while at large distances $\Pr(r_1\,|\,r_2) > \Pr(r_1)$ and $\Pr(r_2\,|\,r_1) > \Pr(r_2)$. In the extreme case when $\vec{r}_{1} = \vec{r}_{2}$, the Coulomb repulsion between the electrons becomes infinite, and thus, $\Pr(\vec{r}_{1}\,|\,\vec{r}_{1}) = 0$ and consequently $\Pr(\vec{r}_{1} \cap \vec{r}_{1}) = 0$, i.e. the probability of finding two electrons at the same point in space is zero. Secondly, as a consequence of the Pauli exclusion principle, electrons in the same spin state can not be found at the same location in space, so that for electrons in the same spin state there is an additional contribution into the inequalities between conditional and unconditional probabilities above. This effect is relatively localized as compared to one due to Coulomb repulsion, but bearing in mind the relationship between probabilities and wave functions and taking into account the continuity of the latter, it is still noticeable when electrons are close to each other and not just at the very same location in space. Now, for a Hartree product wave function $$ \psi_{\mathrm{HP}}(\vec{q}_{1}, \vec{q}_{2}) = \psi_{1}(\vec{q}_{1}) \psi_{2}(\vec{q}_{2}) \, , $$ one can show that $\Pr(r_1 \cap r_2) = \Pr(r_1) \Pr(r_2)$ if two electrons are in different spin states as well as when they are in the same spin state 2) . But for a Slater determinant $$ \Phi(\vec{q}_{1}, \vec{q}_{2}) = \frac{1}{\sqrt{2}} \Big( \psi_{1}(\vec{q}_{1}) \psi_{2}(\vec{q}_{2}) - \psi_{1}(\vec{q}_{2}) \psi_{2}(\vec{q}_{1}) \Big) \, , $$ $\Pr(r_1 \cap r_2) = \Pr(r_1) \Pr(r_2)$ will hold only for electrons of unlike spin, while when both electrons are in the same spin state $\Pr(r_1 \cap r_2) \neq \Pr(r_1) \Pr(r_2)$ 2) . And this inequality is a clear indication of exchange interaction. 1) I leave it to OP as an exercise. Derivation for a simple product of spin orbitals is pretty trivial, while for the case of a Slater determinant OP can consult, for instance, Szabo & Ostlund, Modern Quantum Chemistry, Section 2.3.3. 2) I leave it as another exercise.
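To make the last point concrete, here is a small numerical illustration (my own addition, not from the references cited above): for two same-spin electrons placed in the two lowest particle-in-a-box orbitals, the pair density built from the Slater determinant vanishes at $x_1 = x_2$ (the Fermi hole) and is not equal to the product of one-electron densities, whereas the Hartree product shows no such correlation. The box length, orbitals, and probe point are arbitrary choices.

```python
import numpy as np

L = 1.0                                   # box length (arbitrary units)
x = np.linspace(0, L, 201)
dx = x[1] - x[0]

def phi(n, x):
    """Normalised particle-in-a-box orbital, n = 1, 2, ..."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

x1, x2 = np.meshgrid(x, x, indexing="ij")

# Hartree product and Slater determinant for two same-spin electrons
psi_HP = phi(1, x1) * phi(2, x2)
psi_SD = (phi(1, x1) * phi(2, x2) - phi(1, x2) * phi(2, x1)) / np.sqrt(2)

pair_HP = psi_HP**2                       # joint probability densities
pair_SD = psi_SD**2

# one-electron densities: integrate the pair density over the other coordinate
rho1_SD = pair_SD.sum(axis=1) * dx
rho2_SD = pair_SD.sum(axis=0) * dx

i = len(x) // 3                           # probe an arbitrary interior point x1 = x2
print("Slater det., x1 = x2:     ", pair_SD[i, i])            # 0 (Fermi hole)
print("product of 1e densities:  ", rho1_SD[i] * rho2_SD[i])  # nonzero -> correlated
print("Hartree product, x1 = x2: ", pair_HP[i, i])            # nonzero -> uncorrelated
```

The vanishing pair density at coincident coordinates, despite a nonzero product of one-electron densities, is exactly the inequality $\Pr(r_1 \cap r_2) \neq \Pr(r_1)\Pr(r_2)$ discussed above for same-spin electrons.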
{ "source": [ "https://chemistry.stackexchange.com/questions/61176", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/17583/" ] }
61,406
My textbook mentions that SCUBA tanks often contain a mixture of oxygen and nitrogen along with a little helium which serves as a diluent. Now as I remember it, divers take care not to surface too quickly because it results in ' the Bends ', which involves the formation of nitrogen bubbles in the blood and is potentially fatal. If that's the case, why not use pure oxygen gas in SCUBA tanks? It seems like a good idea since it would a) Enable divers to stay underwater for longer periods of time (I keep hearing that ordinary SCUBA tanks only give divers a pathetic hour or so of time underwater. b) Possibly eliminate the chances of developing ' the Bends ' upon surfacing. Well, it seems plausible, that is if the diver were to take a 10 minute deep-breathing session with pure oxygen to flush out whatever nitrogen's there in his lungs before hooking up a cylinder of pure oxygen and going for a dive. So if there's no gaseous nitrogen in his lungs and blood, then he wouldn't have to worry about nitrogen bubbles developing in his system. Now those two possible advantages aren't hard to overlook, but since no one fills SCUBA tanks with pure oxygen, there must be some reason that I've overlooked, that discourages divers from filling the tanks with pure oxygen. So what is it? Also, I hear that the oxygen cylinders used in hospitals have very high concentrations of oxygen; heck, there's one method of treatment called the Hyperbaric Oxygen Therapy (HBOT) where they give patients 100% pure oxygen at elevated pressures. Hence I doubt whether the increase in pressure associated with diving is the problem here. So I reiterate: Why is it a bad idea for divers to breathe pure oxygen underwater? I guess most of the recent answers have kinda missed a main point, so I'll rephrase the question: Why is it a bad idea for divers to breathe pure oxygen underwater? If it is indeed due to pressure considerations as most sources claim, then why doesn't it seem to be a problem when patients are given 100% pure oxygen in cases like the HBOT (which is performed at elevated pressures) ?
The other answers here, describing oxygen toxicity are telling what can go wrong if you have too much oxygen, but they are not describing two important concepts that should appear with their descriptions. Also, there is a basic safety issue with handling pressure tanks of high oxygen fraction. An important property of breathed oxygen is its partial pressure . At normal conditions at sea level, the partial pressure of oxygen is about 0.21 atm. This is compatible with the widely known estimate that the atmosphere is about 78% nitrogen, 21% oxygen, and 1% "other". Partial pressures are added to give total pressure; this is Dalton's Law . As long as you don't use toxic gasses, you can replace the nitrogen and "other" with other gasses, like Helium, as long as you keep the partial pressure of oxygen near 0.21, and breathe the resulting mixtures without adverse effects. There are two hazards that can be understood by considering the partial pressure of oxygen. If the partial pressure drops below about 0.16 atm, a normal person experiences hypoxia . This can happen by entering a room where oxygen has been removed. For instance, entering a room which has a constant source of nitrogen constantly displacing the room air, lowering the concentration -- and partial pressure -- of oxygen. Another way is to go to the tops of tall mountains. The total atmospheric pressure is lowered and the partial pressure of oxygen can be as low as 0.07 atm (summit of Mt. Everest) which is why very high altitude climbing requires carrying additional oxygen. Yet a third way is "horsing around" with Helium tanks -- repeatedly inhaling helium to produce very high pitched voices deprives the body of oxygen and the partial pressure of dissolved oxygen in the body falls, perhaps leading to loss of consciousness. Alternatively, if the partial pressure rises above about 1.4 atm, a normal person experiences hyperoxia which can lead to oxygen toxicity (described in the other answers). At 1.6 atm the risk of central nervous system oxygen toxicity is very high. So, don't regulate the pressure that high? There's a problem. If you were to make a 10-foot long snorkel and dive to the bottom of a swimming pool to use it, you would fail to inhale. The pressure of air at your mouth would be about 1 atm, because the 10-foot column of air in the snorkel doesn't weigh very much. The pressure of water trying to squeeze the air out of you (like a tube of toothpaste) is about 1.3 atm. Your diaphragm is not strong enough to overcome the squeezing and fill your lungs with air. Divers overcome this problem by using a regulator (specifically, a demand valve), which allows the gas pressure at the outlet to be very near that of the ambient pressure. The principle job of the regulator is to reduce the very high pressure inside the tank to a much lower pressure at the outlet. The demand valve tries to only supply gas when the diver inhales and tries to supply it at very nearly ambient pressure. Notice that at depth the ambient pressure can be much greater than 1 atm, increasing by about 1 atm per 10 m (or 33 feet). If the regulator were to supply normal air at 2 atm pressure, the partial pressure of oxygen would be 0.42 atm. If at 3 atm, 0.63 atm. So as a diver descends, the partial pressure of oxygen automatically increases as a consequence of having to increase the gas pressure to allow the diver to inflate their lungs. 
Around 65 m (220 ft), the partial pressure of oxygen in an "air mix" would be high enough to risk hyperoxia and other dangerous consequences. Now imagine a gas cylinder containing 100% oxygen. If we breathe from it at the surface, the partial pressure of oxygen is 1 atm -- high, but not dangerous. At a depth of 10 m, the partial pressure of supplied oxygen is 2 atm -- exceeding acceptable exposure limits. This is a general pattern -- raising the oxygen fraction of diving gasses decreases the maximum diving depth. And you can't lower the partial pressure much because the lower limit, 0.16 atm, isn't that much lower than the 0.21 atm of sea level atmosphere. One general category of solutions is to change gas mixes at various depths. This is complicated, requires a great deal of planning, and is outside the scope of your question. But it is certainly not as straightforward as just simplifying the gas mixtures or just raising the partial pressure of oxygen. Additionally, compressed oxygen is a relatively annoying gas to work with. It is not itself flammable, but it makes every nearby organic thing flammable. For instance using grease or oil on or near an oxygen fitting risks spontaneously igniting the grease or oil. Merely having grease on your hand while handling oxygen refilling gear (with a small leak) can burn your hand .
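The arithmetic in this answer fits in a few lines of code. Below is a hedged sketch using the rough rule of about 1 atm extra per 10 m of seawater and the commonly quoted ppO2 limits mentioned above (roughly 0.16 atm for hypoxia and 1.4 to 1.6 atm for toxicity); it is an illustration, not dive-planning software.

```python
def ppO2(fraction_O2, depth_m):
    """Partial pressure of oxygen (atm) breathed at a given depth in seawater."""
    ambient_atm = 1.0 + depth_m / 10.0      # ~1 atm extra per 10 m of depth
    return fraction_O2 * ambient_atm

def max_operating_depth(fraction_O2, ppO2_limit=1.4):
    """Depth (m) at which the chosen ppO2 limit is reached for a given mix."""
    return 10.0 * (ppO2_limit / fraction_O2 - 1.0)

for mix, f in [("air (21% O2)", 0.21), ("nitrox 32", 0.32), ("pure O2", 1.00)]:
    print(f"{mix:14s}  ppO2 at 30 m = {ppO2(f, 30):.2f} atm,"
          f"  max depth (1.4 atm limit) = {max_operating_depth(f):.0f} m")
```

With the 1.6 atm limit instead, air works out to about 66 m, matching the "around 65 m" figure quoted above, while pure oxygen is limited to only about 6 m, which is why it is unsuitable as a general-purpose diving gas.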
{ "source": [ "https://chemistry.stackexchange.com/questions/61406", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
61,443
Just say I wanted to give a pH calculation to three significant figures. Eg: the $\mathrm{pH}$ of a $0.500 M$ solution of $\ce{HCl}$ is $-\log(0.5) = 0.3010299957...$ Would 3 s.f. be $0.301$ or $0.30$? I know normally when zero comes before a number, it isn't considered a significant number (i.e $0.00034$ and $0.34$ both have two significant figures). But since pH is a scale, can I treat $0.30$ as three significant figures?
{ "source": [ "https://chemistry.stackexchange.com/questions/61443", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/36219/" ] }
62,001
"Chemical accuracy" in computational chemistry, is commonly understood to be $1~\mathrm{kcal\over mol}$, or about $4~\mathrm{kJ\over mol}$. Spectroscopic accuracy is $1~\mathrm{kJ\over mol}$, and that definition has intuitive sense. However, where does the $1~\mathrm{kcal\over mol}$ quantity come from? From Wikipedia : A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol.
Short answer: the goal of "thermochemical accuracy" for computational chemistry is to match or exceed experimental accuracy. Thus, ~1 kcal/mol comes from the typical error in thermochemical experiments. The drive began with John Pople , who began the modern effort to consider "Model Chemistries," comparing the accuracy of different methods across many molecules and often multiple properties. He realized that for thermodynamic properties, one could approach the accuracy of experiments. (See, for example, his Nobel lecture .) As the model becomes quantitative, the target should be that data is reproduced and predicted within experimental accuracy. For energies, such as heats of formation or ionization potentials, a global accuracy of 1 kcal/mole would be appropriate. He then started work on composite methods like G1, G2, G3, etc., that could approach predicting many chemical properties to this accuracy.
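For reference, a minimal conversion sketch relating the thresholds mentioned in the question (values rounded; the hartree factor is the commonly quoted ~627.5 kcal/mol):

```python
KCAL_PER_KJ = 1 / 4.184           # thermochemical calorie
KCAL_PER_HARTREE = 627.5          # approximate conversion factor

chemical_accuracy_kcal = 1.0                       # "chemical accuracy", kcal/mol
print(chemical_accuracy_kcal / KCAL_PER_KJ)        # ~4.18 kJ/mol
print(chemical_accuracy_kcal / KCAL_PER_HARTREE)   # ~1.6e-3 hartree

spectroscopic_accuracy_kj = 1.0                    # "spectroscopic accuracy", kJ/mol
print(spectroscopic_accuracy_kj * KCAL_PER_KJ)     # ~0.24 kcal/mol
```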
{ "source": [ "https://chemistry.stackexchange.com/questions/62001", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/36893/" ] }
62,934
According to international system of units (SI), we can write "7 kg of apples" to refer to the mass of these apples. However, if we want refer to the amount of apples, that is, the number of entities , the unit should be the mole . So, what's the correct way to denote "7 apples" in accordance with the SI convention? Additionally, is it correct (according to SI) to say "7 atoms of hydrogen"? Or must we use mole?
In accordance with the International System of Units (SI) [ Brochure in English, 8th edition, 2006; updated in 2014 ] and the corresponding International System of Quantities (ISQ) [ ISO/IEC 80000 Quantities and units (14 parts) ], you can define a suitable new quantity, for example with the quantity name “number of apples” and the quantity symbol “$N_\text{apples}$”. The number of apples $N_\text{apples}$ is a quantity of dimension one (for historical reasons, a quantity of dimension one is often called dimensionless): $$\dim N_\text{apples} = 1$$ A quantity of dimension one acquires the unit one, (symbol: $1$); i.e. the coherent SI unit for the number of apples is the unit one. Generally, the unit one is an SI derived unit; for example, the derived SI unit for friction factor is newton per newton equal to one, (symbol: $N/N = 1$). However, the unit one for counting numbers, e.g. number of protons in an atom or number of apples, is considered as a base quantity because it cannot be expressed in terms of any other base quantities. Hence, in this case, the unit one is usually considered as a base unit, although the CGPM has not yet adopted it as an SI base unit. The name and symbol of the measurement unit one are generally not indicated. Therefore, you may write: “The number of apples is $N_\text{apples}=7$.” The unit one or its symbol $1$ may not be combined with SI prefixes. For example, if you have 2000 apples, you must not write “$N_\text{apples}=2\ \mathrm k$” for $N_\text{apples}=2000$. (And by the way, when you see something like “10K reputation” mentioned on any stackexchange site, you are looking at at least three nonconformities at the same time.) Any attachment to a unit symbol as a means of giving information about the special nature of the quantity or context of measurement under consideration is not permitted. Expressions for units shall contain nothing else than unit symbols and mathematical symbols. Therefore, write “the maximum electric potential difference is $U_\text{max}=1000\ \mathrm V$”, not “$U=1000\ \mathrm V_\text{max}$” “the gauge pressure is $p_\mathrm e=0.5\ \text{bar}$”, not “$p=0.5\ \text{bar(g)}$” “the electric power is $P_\text{el}=1300\ \mathrm{MW}$”, not “$P=1300\ \mathrm{MW_{el}}$” “the water content is $170\ \mathrm{g/l}$”, not “$170\ \mathrm{g\ \ce{H2O}/l}$” and also “the number of apples is $N_\text{apples}=7$”, not “$N=7\ \text{apples}$”
{ "source": [ "https://chemistry.stackexchange.com/questions/62934", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/36919/" ] }
64,159
When I use hot stuff like a hair straightener on my hair, my hair begins to smell bad, and the smell is very different from that produced by burning other things. So what is the gas produced that is responsible for this smell?
Hair is largely (~90%) composed of a protein called keratin , which originates in the hair follicle. Now, keratin is composed of a variety of amino acids, including the sulfur containing amino acid, cysteine . All these amino acids are joined to each other by chemical bonds called peptide bonds to form these long chains that we call polypeptide chains . In the case of human hair, the polypeptide that we're talking about is keratin . The polypeptide chains are intertwined around each other in a helix shape. The average composition of normal hair is 45.2 % carbon, 27.9% oxygen, 6.6% hydrogen, 15.1% nitrogen and 5.2% sulfur. (I got that diagram off of Google Images) Now, there are a whole bunch of chemical interactions that maintain the secondary and tertiary structures of proteins, such as van der Waals forces, hydrophobic interactions, polypeptide linkages, ionic bonds, etc. But there is, however, one additional chemical interaction in proteins that contain the amino acids cysteine and methionine (both of which contain sulfur) called disulfide linkages . You can see that in the diagram above (it's been marked in yellow, which is fortunately, a very intuitive color when you're dealing with sulfur). When you burn hair (or skin or nails... anything that has keratin in it for that matter) these disulfide linkages are broken. The sulfur atoms are now free to chemically combine with other elements present in the protein and air, such as oxygen and hydrogen. The volatile sulfur compounds formed as a result is what's responsible for the fetid odor of burning hair. Quite a few of the "bad smells" we come across everyday are due to some sulfur containing compound or the other. A great example would be the smell of rotten eggs, which can be attributed to a volatile sulfur compound called hydrogen sulfide . Yet another example (as @VonBeche points out in the comments) would be that of tert-butylthiol , which is the odorant that is used to impart the characteristic smell of Liquefied Petroleum Gas (LPG).
{ "source": [ "https://chemistry.stackexchange.com/questions/64159", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33507/" ] }
64,362
Hydrogen is flammable, and for any fire to burn it needs oxygen . Why does a compound made of hydrogen and oxygen put out fires instead of catalyzing them? I understand that hydrogen and water are chemically different compounds, but what causes water to be non-flammable?
You can think of water as the ash from burning hydrogen : it's already given off as much energy as possible from reacting hydrogen with oxygen. You can, however, still burn it. You just need an even stronger oxidizer than oxygen. There aren't many of them, but fluorine will work, $$ \ce{2F2 + 2H2O -> 4HF + O2} $$ as will chlorine trifluoride : $$ \ce{ClF3 + 2H2O -> 3HF + HCl + O2} $$
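To put a number on "water is the ash of burning hydrogen", here is a back-of-the-envelope sketch using approximate standard enthalpies of formation (textbook values in kJ/mol at 298 K; treat them as illustrative rather than authoritative):

```python
# approximate standard enthalpies of formation at 298 K, kJ/mol
dHf = {
    "H2": 0.0, "O2": 0.0, "F2": 0.0,     # elements in their reference states
    "H2O(l)": -285.8,
    "HF(g)":  -273.3,
}

# burning hydrogen: H2 + 1/2 O2 -> H2O(l)
dH_burn_H2 = dHf["H2O(l)"] - (dHf["H2"] + 0.5 * dHf["O2"])
print(dH_burn_H2)        # about -286 kJ per mole of H2: this energy is already spent

# "burning" water in fluorine: 2 F2 + 2 H2O(l) -> 4 HF(g) + O2
dH_burn_H2O = (4 * dHf["HF(g)"] + dHf["O2"]) - (2 * dHf["F2"] + 2 * dHf["H2O(l)"])
print(dH_burn_H2O)       # about -522 kJ for the reaction as written: still exothermic
```

The first number is why water will not burn in oxygen; the second shows that with a strong enough oxidizer like fluorine there is still energy left to release.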
{ "source": [ "https://chemistry.stackexchange.com/questions/64362", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/38559/" ] }
64,695
As a total chemistry layman I enjoyed reading " Why doesn't $\ce{H2O}$ burn? ", but it prompted another question in my mind. One of the answers there was that $\ce{H2O}$ can burn in the presence of a stronger oxidizer like fluorine, so burning stable compounds is just a question of using a stronger oxidizer. Or is it? Is there a chemical species, or a "least reactive compound", that, once made, is difficult or impossible to chemically transform into something else? A chemical black hole, if you will. Does the above answer change if we allow high temperatures and pressures versus restricting ourselves to roughly standard temperature and pressure?
I think a good argument can be made for either helium or neon, the most noble of the noble gasses. Those are the two prototypical unreactive elements. They are the only two stable elements for which no more complex compounds (i.e., other than the single atoms themselves) have yet been isolated, at any temperature. The slightly more reactive element argon will admit formation of compounds such as argon hydrofluoride ($\ce{HArF}$). This compound is only stable up to $\mathrm{17\ K}$, because any hotter and the frail bonds are overcome by random thermal collisions which break the compound apart into $\ce{Ar}$ and $\ce{HF}$. Picking which of helium or neon is less reactive is a bit more difficult. A naive analysis of periodic trends would point to helium as the most inert, but more detailed computational studies suggest that at least in some cases neon may be less reactive. For example , the extremely Lewis acidic compound beryllium monoxide ($\ce{BeO}$) may potentially form an isolable, if very weakly bound, compound with helium, $\ce{HeBeO}$, but neon is not thought to form the analogous $\ce{NeBeO}$. $\ce{HHeF}$ may also be just barely stable, whereas $\ce{HNeF}$ is not thought to form. None of these have yet been observed in the laboratory, but there is definitely ongoing research into coaxing helium and neon to make isolable compounds. All that said, we can get helium and neon to react, if we drop the requirement that the product must be isolated (that is, "put into a bottle"). Chemical species such as $\ce{He2^+}$, $\ce{Ne2^+}$ and $\ce{HeNe^+}$ have long been known from mass spectrometry experiments, they just can't be isolated because that would require the presence of a counterbalancing negative ion, which would immediately proceed to react with the positive ion and cause decomposition with release of the noble gas. You can also bring what is arguably the most reactive species in chemistry, the hydrogen cation ($\ce{H^+}$) into the fray. Helium and neon will both easily react with $\ce{H^+}$ to form $\ce{HeH+}$ and $\ce{NeH+}$ , as shown by the exothermic proton affinities of $\ce{He}$ and $\ce{Ne}$. Again, these composite particles cannot be isolated, as they are amongst the strongest Brønsted-Lowry acids in existence, and will protonate anything they come into contact with in order to release the neutral noble gas atom. Some other possible non-isolable relevant noble gas ions are $\ce{FHeO^{-}}$, $\ce{HeCCH+}$ and $\ce{PbHe15^{2+}}$ (!), among others. Edit: I forgot to mention that there are a few other cases of helium or neon binding to other atoms. By exciting one of the electrons in the atom, it is possible to coax it to bond with another one, forming an excimer or exciplex. See for example, the dihelium excimer $\mathrm{He_2^{*}}$. This is a short-lived species that still can't be isolated, however, because in a matter of microseconds it releases a photon and de-excites, promptly separating into the two free atoms.
{ "source": [ "https://chemistry.stackexchange.com/questions/64695", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/38963/" ] }
65,044
Now I have been learning chemistry for five years. I remember when I started organic chemistry, it was fun to draw arrows between molecules to show, as if in a mathematical demonstration, how the reactions occurred. In every lesson I had, teachers explained to us how a specific reaction (for example the Shapiro reaction) occurs step by step, explaining the chemistry of each group in each intermediate as if things were obvious (you know how teachers are). But I've been wondering for some weeks now: how does a mechanism come to be considered accepted, or remain under discussion? If spectrometry techniques are used, what kinds are used to measure the amount of each intermediate? If not, how do they proceed? Do they use computational chemistry? For example, for a reaction such as an $\mathrm{S_N2}$ it doesn't look too tricky to work out how it happens, whereas for the Fries rearrangement (I don't know whether the mechanism is considered accepted or not) it seems to be trickier. (Ref.) So can you explain the methods (at least the most used) to confirm a mechanism? I am aware that "confirm" does not mean that we are 100% sure, but rather that it is simply the best we have found so far.
Great question! When I was teaching, Anslyn and Dougherty was a decent text for this. Here are some general comments: First, please note that you cannot be sure about a mechanism. That's the real killer. You can devise experiments that are consistent with the mechanism but because you cannot devise and run all possible experiments, you can never be sure that your mechanism is correct. It only takes one good experiment to refute a mechanism. If it's inconsistent with your proposed mechanism, and you're unable to reconcile the differences, then your mechanism is wrong (or incomplete at best). Writing mechanisms for new reactions is hard. Good thing we have a whole slew of existing reactions that people already have established (highly probable, but not 100% guaranteed) mechanisms for. Computational chemistry is pretty awesome now and provides some really good insights into how a specific reaction takes place. It doesn't always capture all relevant factors so you need to be careful. Like any tool, it can be used incorrectly. The types of reactions you run really depend heavily on the kind of reaction you're studying. Here are some typical ones: Labeling -- very good for complex rearrangements Kinetics (including kinetic isotope effects) -- good for figuring out rate-determining steps Stereochemistry -- Good for figuring out if steps are concerted (see this example mechanism I wrote for a different question ) Capturing intermediates -- This can be pretty useful but some species that you capture aren't involved in the reaction, so be careful. Substitution effects and LFER studies -- Great for determining if charge build-up is accounted for in your mechanism For named reactions, the Kurti-Czako book generally has seminal references if you want to actually dig through the literature for experiments. For your specific reaction, what do we think the rate-determining step is? Probably addition into the acylium? You could try to capture the acylium intermediate. You could run the reaction with reactants that have two labelled oxygens and reactants that have no labelled oxygens. Do they mix? If not, it's fully intramolecular. Otherwise, there's an intermolecular component and the mechanism as written is incomplete. A quick Google search suggests that the boron trichloride mediated version has been studied via proton, deuterium, and boron NMR. I didn't follow up on this, but there's clearly some depth here. When I was T.A.ing for Greg Fu, he really liked to use an example with the von Richter reaction. I might be able to find those references...
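To make the "substitution effects and LFER studies" point a bit more concrete, here is a minimal sketch of a Hammett analysis. The σ values below are standard para-substituent constants; the rate constants are invented purely for illustration and are not taken from any real study.

```python
import numpy as np

# Hammett (LFER) sketch: fit log(k_X / k_H) against substituent constants sigma.
# sigma_para values are standard literature constants; the rates are hypothetical.
sigma = np.array([-0.27, -0.17, 0.00, 0.23, 0.78])   # p-OMe, p-Me, H, p-Cl, p-NO2
rates = np.array([0.35, 0.55, 1.00, 2.40, 18.0])     # invented relative rate constants

log_rel = np.log10(rates / rates[2])                 # index 2 is the unsubstituted (H) case

rho, intercept = np.polyfit(sigma, log_rel, 1)       # slope = reaction constant rho
print(f"rho ~ {rho:.2f}")
# A positive rho means electron-withdrawing groups accelerate the reaction,
# i.e. negative charge builds up (or positive charge is lost) in the rate-determining step.
```

A proposed mechanism that predicts charge build-up of the wrong sign would be inconsistent with such a plot, which is exactly the kind of refutation test described above.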
{ "source": [ "https://chemistry.stackexchange.com/questions/65044", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/15235/" ] }
65,233
The Wittig reaction is one of the most significant advances in synthetic organic chemistry in the 20th century and rightfully won its discoverer, Georg Wittig, the Nobel Prize in Chemistry. A Wittig reaction is the addition of a phosphorus ylide (previously thought to be an ylene with a $\ce{C=P}$ bond) to an aldehyde or a ketone, resulting in a $\ce{C=C}$ double bond. Mechanistically, there is little debate that the final step is a cycloreversion of an oxaphosphetane liberating a phosphane oxide. However, as far as I know there is no agreement on the preceding step(s) that form said oxaphosphetane, with two different competing mechanistic proposals:

1. Nucleophilic attack of the ylide onto the carbonyl carbon to give a charge-separated betaine structure. This rotates around the newly formed $\ce{C-C}$ bond until the oxide and phosphonium are in proximity, whereupon a bond is formed to close the oxaphosphetane (stepwise mechanism).
2. [2+2] cycloaddition of the ylide and the carbonyl to immediately form the oxaphosphetane in a concerted manner.

Wikipedia just presents these two mechanisms side by side, hence it is not really helpful. Which of the two mechanisms is, at present, considered the most likely reaction pathway? Which principal pieces of evidence point towards it and disfavour the competing mechanism? Please answer including references to the corresponding journal articles.
tl;dr: I don't think there is any mechanism that is 100% correct, and in cases like this especially, I think it would completely depend upon what set of carbonyl/ylid/base/solvent etc. was used. But, of course, we like being able to generalise, and to my knowledge there's a lot more evidence to support a concerted-type mechanism.

General background

The question is hopefully summarised in the scheme below (no geometry; that, in itself, is a pretty lengthy discussion). Reaction of a carbonyl (in this case a ketone) with a phosphorus ylid is able to give rise to two species, either an oxaphosphetane directly or a betaine (an internal salt, for the benefit of Loong), which then goes on to form the oxaphosphetane; this in turn is able to eliminate to afford the olefin, with generation of triphenylphosphine oxide as a thermodynamic driving force. This answer will present the case for a cycloaddition mechanism, and evidence against the betaine pathway. Importantly, only Wittig reactions of unstabilised ylids are considered, as with stabilised ylids the rules of the game change due to reversibility of intermediate formation (see a Horner–Wadsworth–Emmons).

Evidence against the initial formation of a betaine

Betaines have never been observed during the course of a 'normal' Wittig reaction; that is, spectroscopically, we are unable to see one (this may, of course, just be due to the fact that the formation of the oxaphosphetane is so rapid relative to the timescale of the methods used). Many reports of betaines have been from accidental (or otherwise) opening of oxaphosphetane intermediates, and indeed this seems to have been what led Wittig down the betaine pathway in the initial reports (though in the first ever paper he suggested that the oxaphosphetane was the sole intermediate, but subsequently couldn't find any evidence for it back in the days before NMR etc.). Wittig [2] and others [3] reported the isolation of crystalline betaine salts. Phosphonium bromides were treated with PhLi to give an ylid, to which a carbonyl was added; subsequent addition of hydrogen bromide afforded crystalline solids whose structures could be elucidated, proving the formation of a betaine. Whilst this was convincing at the time, we now know that the betaine isn't observed, but rather the oxaphosphetane is. Vedejs (whom we'll come back to) has gone on to show that the earlier findings could be explained by quenching of the oxaphosphetane rather than as a result of directly trapping the betaine [4], which is more consistent with the other data available to us.

Scheme 1: Generation of the crystalline betaine derivative from the oxaphosphetane.

There are more recent reports of 'stable' betaines for very specific substrates where subsequent collapse isn't possible, again generated from oxaphosphetane intermediates. Stefan Berger's group at Leipzig reported a betaine stabilised by a bipyridyl group, allowing NMR data to be reported for the first time [1].

Scheme 2: Reversible formation of a stabilised betaine.

In this case, the lithium holds a chelated structure together, preventing it from forming the alkene (and also, to a certain extent, preventing reformation of the oxaphosphetane on steric grounds). Interestingly, with addition of 18-crown-6 to sequester the lithium, the reaction proceeded normally in the forward direction, with alkene signals observed.
The fact that Berger was able to do this does, of course, not imply that such betaines are real intermediates along the Wittig pathway, but it was a nice piece of mechanistic work that I thought deserved mention. In addition to this mechanistic work, there are some empirical issues with invoking a betaine intermediate that cannot be easily explained. Namely, all Wittig reactions using non-stabilised ylids should be under kinetic control and irreversible (hence highly (Z)-selective, due to the inability of the initial intermediates to reverse and hence equilibrate to the thermodynamic product); however, this is frequently observed not to be the case, and in certain cases high (E)-selectivity can be achieved.

Evidence for direct oxaphosphetane formation

Vedejs was one of the first to propose direct (irreversible) cycloaddition of the ylid and carbonyl to give rise to the oxaphosphetane, quickly followed by cycloreversion to form the desired alkene and a phosphine oxide.

Scheme 3: Cycloaddition/cycloreversion mechanism for the Wittig reaction.

In his early reports, direct observation of the oxaphosphetane by $\ce{^{31}P}$ NMR was conducted [5]; the paper reasons that the high-field chemical shift observed for the phosphorus-containing intermediate ruled out charged, 4-valent species such as betaines, and was instead more consistent with formation of a 5-valent neutral species such as an oxaphosphetane.

Scheme 4: Vedejs' observation of the oxaphosphetane.

This NMR analysis isn't really too convincing by itself; however, in the scheme above, compound 4 had previously been characterised by X-ray crystallography [6] and various NMR methods, and as such gave some weight to the species observed by Vedejs being similar to the known compound. More recent work [7] has also directly observed the cis/trans oxaphosphetanes formed via competing cycloadditions.

Summary

Overall, direct observation of the oxaphosphetane but not the betaine does seem to suggest that mechanistically the cyclic intermediate is formed directly rather than via closure of the betaine. As stated previously, we can't 100% rule out the possibility that the betaine closure is just sufficiently rapid to appear invisible to our current spectroscopic methods; however, every report of isolated/observed betaines that I've come across thus far has either intentionally been formed from an oxaphosphetane or can be explained by this without the authors recognising it. One thing that is hugely missing from the story is computational work: whilst some studies have calculated the relative energies of the two intermediates, to my knowledge the entire reaction coordinate hasn't been fully explored. In conclusion, I think the cycloaddition mechanism is safest, and indeed it was the one I was taught as an undergraduate (and the one I know several other leading universities use), so if in doubt I'd go for that explanation, but on the firm understanding that more mechanistic work is needed.

References

Generally: Modern Carbonyl Olefination, Carey Advanced Organic A, and Comprehensive Organic Synthesis I.

[1] Neumann, R. A.; Berger, S. Observation of a Betaine Lithium Salt Adduct During the Course of a Wittig Reaction. Eur. J. Org. Chem. 1998, 6, 1085. (Subramanyam had reported similar chemistry earlier, but the NMR data was incomplete.)

[2] Wittig, G.; Haag, A. Über Phosphin-alkylene als olefinbildende Reagenzien, VIII. Allenderivate aus Ketenen. Chem. Ber. 1963, 96, 1535. DOI: 10.1002/cber.19630960609. (A very old paper in German.)

[3] Schlosser, M.; Christmann, K. F.
Olefinierungen mit Phosphor-Yliden, I. Mechanismus und Stereochemie der Wittig-Reaktion. Liebigs Ann. Chem. 1967, 708, 1. DOI: 10.1002/jlac.19677080102.

[4] Vedejs, E.; Meier, G. P.; Snoble, K. A. J. Low-temperature characterization of the intermediates in the Wittig reaction. J. Am. Chem. Soc. 1981, 103, 2823. DOI: 10.1021/ja00400a055.

[5] Vedejs, E.; Snoble, K. A. J. Direct observation of oxaphosphetanes from typical Wittig reactions. J. Am. Chem. Soc. 1973, 95, 5778. DOI: 10.1021/ja00798a066.

[6] Mazhar-Ul-Haque; Caughlan, C. N.; Ramirez, F.; Pilot, J. F.; Smith, C. P. Crystal and molecular structure of a four-membered cyclic oxyphosphorane with pentavalent phosphorus, PO2(C6H5)2(CF3)4C3H2. J. Am. Chem. Soc. 1971, 93, 5229. DOI: 10.1021/ja00749a044.

[7] Maryanoff, B. E.; Reitz, A. B.; Mutter, M. S.; Whittle, R. R.; Olofson, R. A. Stereochemistry and mechanism of the Wittig reaction. Diastereomeric reaction intermediates and analysis of the reaction course. J. Am. Chem. Soc. 1986, 108, 7664. DOI: 10.1021/ja00284a034.
{ "source": [ "https://chemistry.stackexchange.com/questions/65233", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/7475/" ] }
65,297
$\ce{NH3}$ is a weak base so I would have expected $\ce{NH4+}$ to be a strong acid. I can't find a good explanation anywhere and am very confused. Since only a small proportion of $\ce{NH3}$ molecules turn into $\ce{NH4+}$ molecules, I would have expected a large amount of $\ce{NH4+}$ molecules to become $\ce{NH3}$ molecules.
First, let's get the definition of weak and strong acids or bases out of the way. The way I learnt it (and the way everybody seems to be using it) is:

$\displaystyle \mathrm{p}K_\mathrm{a} < 0$ for a strong acid
$\displaystyle \mathrm{p}K_\mathrm{b} < 0$ for a strong base
$\displaystyle \mathrm{p}K_\mathrm{a} > 0$ for a weak acid
$\displaystyle \mathrm{p}K_\mathrm{b} > 0$ for a weak base

Thus strong acid and weak base are not arbitrary labels but clear definitions based on an arbitrary measurable physical value — which becomes a lot less arbitrary if you remember that this coincides with acids stronger than $\ce{H3O+}$ or acids weaker than $\ce{H3O+}$. Your point of confusion seems to be a statement that is commonly taught and unquestionably physically correct, which, however, students have a knack of misusing: The conjugate base of a strong acid is a weak base. Maybe we should write that in a more mathematical way: If an acid is strong, its conjugate base is a weak base. Or in mathematical symbolism: $$\mathrm{p}K_\mathrm{a} (\ce{HA}) < 0 \Longrightarrow \mathrm{p}K_\mathrm{b} (\ce{A-}) > 0\tag{1}$$ Note that I used a one-sided arrow. These two expressions are not equivalent; one is the consequence of the other. This is in line with another statement that we can write pseudomathematically: If it is raining heavily, the street will be wet. $$n(\text{raindrops}) \gg 0 \Longrightarrow \text{state}(\text{street}) = \text{wet}\tag{2}$$ I think we all immediately agree that this is true. And we should also all agree that the reverse is not necessarily true: if I empty a bucket of water on the street, then the street will be wet but it is not raining. Thus: $$\text{state}(\text{street}) = \text{wet} \rlap{\hspace{0.7em}/}\Longrightarrow n(\text{raindrops}) \gg 0\tag{2'}$$ This should serve to show that sometimes, consequences are only true in one direction. Spoiler: this is also the case for the strength of conjugate acids and bases. Why is the clause above on strength and weakness only true in one direction? Well, remember how $\mathrm{p}K_\mathrm{a}$ values are defined: $$\begin{align}\ce{HA + H2O &<=> H3O+ + A-} && K_\mathrm{a} (\ce{HA}) = \frac{[\ce{A-}][\ce{H3O+}]}{[\ce{HA}]}\tag{3}\\[0.6em] \ce{A- + H2O &<=> HA + OH-} && K_\mathrm{b} (\ce{A-}) = \frac{[\ce{HA}][\ce{OH-}]}{[\ce{A-}]}\tag{4}\end{align}$$ Mathematically and physically, we can add equations $(3)$ and $(4)$ together, giving us $(5)$: $$\begin{align}\ce{HA + H2O + A- + H2O &<=> A- + H3O+ + HA + OH-}&& K = K_\mathrm{a}\times K_\mathrm{b}\tag{5.1}\\[0.6em] \ce{2 H2O &<=> H3O+ + OH-}&&K = K_\mathrm{w}\tag{5.2}\end{align}$$ We see that everything connected to the acid $\ce{HA}$ cancels out in equation $(5)$ (see $(\text{5.2})$), and thus that the equilibrium constant of that reaction is the autodissociation constant of water, $K_\mathrm{w}$. From that, equations $(6)$ and $(7)$ show us how to arrive at a well-known and important formula: $$\begin{align}K_\mathrm{w} &= K_\mathrm{a} \times K_\mathrm{b}\tag{6}\\[0.6em] 10^{-14} &= K_\mathrm{a} \times K_\mathrm{b}\\[0.6em] 14 &= \mathrm{p}K_\mathrm{a} (\ce{HA}) + \mathrm{p}K_\mathrm{b} (\ce{A-})\tag{7}\end{align}$$ Now let us assume the acid in question is strong, e.g. $\mathrm{p}K_\mathrm{a} (\ce{HA}) = -1$. Then, by definition, the conjugate base must be (very) weak: $$\mathrm{p}K_\mathrm{b}(\ce{A-}) = 14- \mathrm{p}K_\mathrm{a}(\ce{HA}) = 14-(-1) = 15\tag{8}$$ Hence, our forward direction of statement $(1)$ holds true.
However, the same is not true if we add an arbitrary weak acid to the equation; say $\mathrm{p}K_\mathrm{a} (\ce{HB}) = 5$. Then we get: $$\mathrm{p}K_\mathrm{b} (\ce{B-}) = 14-\mathrm{p}K_\mathrm{a}(\ce{HB}) = 14-5 = 9\tag{9}$$ A base with a $\mathrm{p}K_\mathrm{b} = 9$ is a weak base. Thus, the conjugate base of the weak acid $\ce{HB}$ is a weak base. We realise that we can generate a weak base in two ways: by plugging a strong acid into equation $(7)$, or by plugging in a sufficiently weak acid. Since the sum of $\mathrm{p}K_\mathrm{a} + \mathrm{p}K_\mathrm{b}$ must equal $14$, it is easy to see that both cannot be strong. However, it is very possible that both the base and the acid are weak. Thus, the reverse statement of $(1)$ is not true. $$\mathrm{p}K_\mathrm{a}(\ce{HA}) < 0 \rlap{\hspace{1em}/}\Longleftarrow \mathrm{p}K_\mathrm{b} (\ce{A-}) > 0\tag{1'}$$
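To connect this algebra back to the ammonia/ammonium case in the question, here is a small numerical sketch. It assumes 25 °C so that $\mathrm{p}K_\mathrm{w} = 14$; the value $\mathrm{p}K_\mathrm{b}(\ce{NH3}) \approx 4.75$ is a standard handbook number.

```python
# Conjugate acid/base bookkeeping: pKa + pKb = pKw (= 14.0 at 25 degrees C).
PKW = 14.0

def conjugate_pk(pk: float) -> float:
    """Given the pKa of an acid, return the pKb of its conjugate base (and vice versa)."""
    return PKW - pk

pkb_nh3 = 4.75                    # ammonia: a weak base (pKb > 0)
pka_nh4 = conjugate_pk(pkb_nh3)   # ammonium, its conjugate acid
print(f"pKa(NH4+) = {pka_nh4:.2f}")  # 9.25 > 0, so NH4+ is also a *weak* acid
```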
{ "source": [ "https://chemistry.stackexchange.com/questions/65297", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/29436/" ] }
65,351
This might be more of a physics question, but is there a ceiling on how hot things can get? What happens at this temperature?
In the current theories of physics, the highest temperature that has a physical meaning is the Planck temperature. $$T_\mathrm{P} = \frac{m_\mathrm{P} c^2}{k} = \sqrt{\frac{\hslash c^5}{G k^2}} \approx \pu{1.4e32 K}$$ For the moment, no theory predicts a higher temperature, because of the limits of our theories. There is a Wikipedia article about absolute hot with some references; it is worth a look:

"Contemporary models of physical cosmology postulate that the highest possible temperature is the Planck temperature, which has the value $\pu{1.416785(71)e32}$ kelvin [...]. Above about $\pu{10^{32} K}$, particle energies become so large that gravitational forces between them would become as strong as other fundamental forces according to current theories. There is no existing scientific theory for the behavior of matter at these energies. A quantum theory of gravity would be required. The models of the origin of the universe based on the Big Bang theory assume that the universe passed through this temperature about $10^{−42}$ seconds after the Big Bang as a result of enormous entropy expansion."
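As a quick numerical check of the formula quoted above, the value can be reproduced from CODATA constants; the constants below are the only inputs, nothing else is assumed.

```python
from math import sqrt

# Planck temperature T_P = sqrt(hbar * c^5 / (G * k^2)), CODATA 2018 values.
hbar = 1.054_571_817e-34   # reduced Planck constant, J s
c    = 2.997_924_58e8      # speed of light, m/s (exact)
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
k    = 1.380_649e-23       # Boltzmann constant, J/K (exact)

T_P = sqrt(hbar * c**5 / (G * k**2))
print(f"T_P ~ {T_P:.3e} K")   # ~1.417e+32 K, matching the quoted value
```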
{ "source": [ "https://chemistry.stackexchange.com/questions/65351", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/402/" ] }
65,396
The other day, when we were dealing with the chapter Solutions , our teacher asked us this: If I add a drop of water, to a tin full of sugar (without mixing it in), what's the solvent here? The water or the sugar? Naturally we were taken aback by the 'unorthodox' nature of the question. Seeing that none of us were in a position to answer that any time soon, he gave us the "answer": It's the sugar . If you remember the definition we learnt ( solute + solvent = solution; between the solute and the solvent, the solvent is one present in a larger quantity and is in the same phase/state of matter as the solution ) you ought to have realized that, since there's more sugar than water, and at the end of the process the tin's filled with a solid. Now that may seem like a clever way of testing our command of terminologies and definitions, but I saw a hitch there. A solution is defined as a homogeneous mixture. Adding a drop of water to a tin of sugar (without mixing it in) does not result in a homogeneous mixture. So as interesting as the question is, I feel it's flawed on this account... because what we're dealing with isn't technically even a solution. Now I told this to my teacher, but he (quite deftly) side-stepped my query...a subtle way of indicating that he's not comfortable discussing this. So I guess my question here boils down to this: Was my teacher's "answer" correct? Or was the question seriously flawed to begin with? EDIT- I wouldn't call this a duplicate. Sure, I mean, both my question and the other one that was linked with this was about identifying the solvent and solute but I feel my question is distinct because: 1) It isn't answered well enough in the other question (that answer was too generalized... I've given a specific instance here) 2) I also want to know if a drop of water added to a tin full of sugar (without mixing it) can be considered a solution.
You could imagine stirring the sugar enough for the water molecules to be uniformly distributed throughout - it would then be homogeneous. However, even then, to refer to the mixture as a solution of water in sugar is unhelpful, not least because referring to a slightly damp solid as a solution will only confuse. A definition needs to be useful, as well as being a set of criteria to be met. Consider adding more water, and stirring till uniformly distributed. At some point, the mixture will become a thick syrupy liquid. Add yet more, and it will eventually resemble an ordinary solution. At what arbitrary point do you say one is a solution of the other? As a device to get students thinking about the interactions that take place when a solid dissolves in a liquid, or a liquid in another liquid, it's an excellent question, but not one that has either a right or a wrong answer, other than "neither", or indeed "both". What is definitely wrong is to insist that there's only one right answer to the question.
{ "source": [ "https://chemistry.stackexchange.com/questions/65396", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
65,495
I've come across the term "Heavy metals" innumerable times in articles, mostly pertaining to environmental issues. Is there a weight range (of sorts) against which an element (metal) is classified as a "Heavy metal"? Often I see "Heavy metals" refer to Cadmium (atomic number 48), Mercury (atomic number 80) and Lead (atomic number 82), but does this term also apply to metals of intermediate atomic masses [such as Tungsten (atomic number 74), Gold (atomic number 79) and Tin (atomic number 50)]? I can't seem to find any chemical literature that deals with this, so I suspect that the term "Heavy metal" has no real serious scientific definition.
There is no single, universally accepted definition of heavy metal. I was taught to use the criterion of a metal with a density equal to or over $5.0\ \mathrm{g/cm^3}$. Other variants use a different density cut-off, specific gravity instead of density, environmental impact, atomic number, toxicity, atomic mass, or even chemical properties. See the reference below$^{[1]}$ for further information and references. $[1]$ John H. Duffus. '"Heavy metals"—a meaningless term?' IUPAC technical report. Pure and Applied Chemistry 2002, 74(5), pp 793–807.
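As a rough illustration of how arbitrary the density criterion is, here is a sketch that applies the "≥ 5.0 g/cm³" rule to a handful of metals; the densities are approximate room-temperature handbook values.

```python
# Applying the "density >= 5.0 g/cm^3" convention to a few metals.
# Densities are approximate handbook values in g/cm^3.
density = {
    "Al": 2.70, "Ti": 4.51, "Fe": 7.87, "Cu": 8.96, "Cd": 8.65,
    "Sn": 7.26, "W": 19.3, "Au": 19.3, "Hg": 13.5, "Pb": 11.3,
}

heavy = sorted(metal for metal, d in density.items() if d >= 5.0)
print("'Heavy' by this criterion:", ", ".join(heavy))
# Note that Fe, Cu and Sn pass the cut-off even though they are rarely what
# environmental texts mean by "heavy metals" - one reason the term is so fuzzy.
```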
{ "source": [ "https://chemistry.stackexchange.com/questions/65495", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
65,865
I keep seeing the term "Aqua" in the ingredient labels on several shampoo varieties, but I really don't see why it should be there in the first place. I mean, if the manufacturers just wanted to say it contains water, couldn't they've printed out "Water" instead? Or could it be that "Aqua" is slang for purified water (or water that's been treated in some godforsaken way) in the shampoo industry? Well, I guess there could always be the possibility that the manufacturers think "Aqua" sounds a lot fancier than plain ol' "Water". So why's the term "Aqua" mentioned there?
In most countries, cosmetic product labels use the International Nomenclature of Cosmetic Ingredients (INCI) for listing ingredients. The INCI name “AQUA” indeed just describes water (which is used as a solvent).
{ "source": [ "https://chemistry.stackexchange.com/questions/65865", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
66,088
In physics class, we write the first law of thermodynamics as $\mathrm dU =\mathrm dQ - \mathrm dW$, and in physical chemistry class, we write the same law as $\mathrm dU =\mathrm dQ + \mathrm dW$. The reason is that the sign convention is different in the two cases: in physics we take work done by the system as positive, and in chemistry, work done on the system. I realize that this does not change the actual law of nature, but I just want to know why we have different sign conventions. Wouldn't just one convention make life easier? Is there a historical reason? Or is this just to differentiate between the subjects?
This is not a simple physics versus chemistry distinction. I taught Physics for 25 years and saw many examples of either usage in multiple textbooks. In fact, at some point in my tenure, the AP Physics committee swapped conventions on the equation sheet for the AP Exam. Just my take here: I've always attributed the work-done-by-the-system camp as being more prone to be used by engineering types who want to know "what the system can do for us" in practical applications. On the other hand, work-done-on-the-system seems to foster the view of an experimenter or theoretician operating on a system from without.
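A quick numerical sanity check (the numbers are invented for illustration): suppose a gas absorbs 100 J of heat and expands, doing 40 J of work on its surroundings. Both conventions give the same change in internal energy.

```python
# Both sign conventions describe the same physics.
q = 100.0                    # heat added to the system, J
w_by_system = 40.0           # work done BY the system on the surroundings, J
w_on_system = -w_by_system   # work done ON the system, J

dU_physics   = q - w_by_system   # dU = dQ - dW, with W = work done by the system
dU_chemistry = q + w_on_system   # dU = dQ + dW, with W = work done on the system
assert dU_physics == dU_chemistry == 60.0
print(dU_physics, "J")
```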
{ "source": [ "https://chemistry.stackexchange.com/questions/66088", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/-1/" ] }
66,091
Can someone explain (on an advanced level) what physicochemical property of compounds/elements determines what their half-cell redox potential will be? For example, Zn and Cu are structurally very similar, and their ionization energies, electronegativities, etc. won't be all that different. So why is the 2-electron redox potential for each so different? I.e., Zn is -0.76 V (SHE) and Cu is +0.34 V (SHE), a significant difference of 1.1 V. Isn't electric potential the amount of "stored" energy? So wouldn't removing or adding 2 electrons to Zn or Cu be very similar, since they are next to each other on the periodic table? Edit: I've added some sources and clarity to the discussion in the answer below.
{ "source": [ "https://chemistry.stackexchange.com/questions/66091", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39780/" ] }
66,109
Some amino acids have a one-letter code that's just the first letter of the name of the amino acid. This makes sense and obviously, since there is more than one amino acid that begins with the same letter, other letters had to be used. But why were these other letters chosen? For instance, is there a reason 'W' specifically was chosen for tryptophan (other than the fact that 'T' was taken)?
Tryptophan For instance, is there a reason 'W' specifically was chosen for tryptophan (other than the fact that 'T' was taken)? Once you have assigned the other 19 amino acids, there are only 7 letters of the alphabet left: B, J, O, U, W, X, and Z. (Certainly not a nice Scrabble hand to have!) If one wants to use a letter found within the name of the amino acid, the only available letter would be O. However, the usage of U and O was historically discouraged because these letters could be easily confused with other letters (U with V; O with G, Q, C, D, and the number 0). It turns out that the choice was made because W is a very fat letter and was reminiscent of the indole ring system present in tryptophan (the only amino acid to contain a bicyclic system). (See below for source.) What happened to the other six letters, then? All 26 letters of the alphabet now find use as a one-letter code for amino acids or various combinations thereof. After their discoveries, selenocysteine and pyrrolysine (the latter only found in bacteria) were assigned U and O respectively. Furthermore, B is used to represent aspartic acid OR asparagine; J is used to represent leucine OR isoleucine; Z is used to represent glutamic acid OR glutamine; and X is used to represent an unknown amino acid. I believe B and Z find use because, in protein sequencing, acid hydrolysis is often used to break peptide bonds. This has the undesirable side-effect of hydrolysing the amide groups in asparagine/glutamine, leading to the formation of aspartic/glutamic acids, which means that one cannot tell exactly which amino acid it was at the start. J is used in NMR spectroscopy where isoleucine and leucine are difficult to distinguish. Why did they choose the letters they did? As far as I am aware the usage of one-letter symbols is adopted by both IUPAC and IUB (since 1991, IUBMB) in their joint 1983 recommendations on "Nomenclature and Symbolism for Amino Acids and Peptides". 1 ankit7540's concise summary of the historical development already mentioned these recommendations. In particular, Section 3AA-21.2 "The Code Symbols" has a description of why the letters were chosen. This document is probably the most authoritative stance on the matter. The rationale is mostly in line with Jan's answer: Initial letters of the names of the amino acids were chosen where there was no ambiguity. There are six such cases: cysteine, histidine. isoleucine, methionine, serine, and valine. All the other amino acids share the initial letters A, G, L, P or T, so arbitrary assignments were made. These letters were assigned to the most frequently occurring and structurally most simple of the amino acids with these initials, alanine (A), glycine (G), leucine (L), proline (P) and threonine (T). Other assignments were made on the basis of associations that might be helpful in remembering the code, e.g. the phonetic associations of F for phenylalanine and R for arginine. For tryptophan the double ring of the molecule is associated with the bulky letter W. The letters N and Q were assigned to asparagine and glutamine respectively; D and E to aspartic and glutamic acids respectively. K and Y were chosen for the two remaining amino acids, lysine and tyrosine, because, of the few remaining letters, they were close alphabetically to the initial letters of the names. U and O were avoided because U is easily confused with V in handwritten material, and O with G, Q, C and D in imperfect computer print-outs, and also with zero. 
J was avoided because it is absent from several languages. Two other symbols are often necessary for partly determined sequences, so B was assigned to aspartic acid or asparagine when these have not been distinguished; Z was similarly assigned to glutamic acid or glutamine. X means that the identity of an amino acid is undetermined, or that the amino acid is atypical. One can only hypothesise what they meant by "associations that might be helpful in remembering the code" in the case of N/Q/D/E. My best guess is: D and E were possibly chosen for aspartic and glutamic acids because they were the only consecutive pair of letters left, emphasising their chemical similarity. Aspartic acid is shorter than glutamic acid by one methylene group (CH 2 ), so it gets the earlier letter D. Glutamine sounds like Q-tamine. If you don't think it sounds similar, repeat it 50 times until you do. AsparagiNe was assigned N. Reference IUPAC-IUB Joint Commission on Biochemical Nomenclature. Nomenclature and Symbolism for Amino Acids and Peptides: Recommendations 1983. FEBS J. 1984, 138 (1), 9–37. DOI: 10.1111/j.1432-1033.1984.tb07877.x. A HTML version (perhaps more user-friendly) can be found at this address .
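For reference, the assignments discussed above can be collected into a small lookup table; the three-letter abbreviations follow the same IUPAC-IUB recommendations.

```python
# One-letter codes for the 20 standard amino acids, plus the ambiguity/special codes.
ONE_LETTER = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}
SPECIAL = {"Asx": "B", "Glx": "Z", "Xle": "J", "Xaa": "X", "Sec": "U", "Pyl": "O"}

def to_one_letter(residues):
    """Translate a sequence of three-letter codes into a one-letter string."""
    table = {**ONE_LETTER, **SPECIAL}
    return "".join(table[r] for r in residues)

print(to_one_letter(["Met", "Trp", "Lys"]))   # -> "MWK"
```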
{ "source": [ "https://chemistry.stackexchange.com/questions/66109", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/402/" ] }
66,618
An atom typically consists of electrons, protons, and neutrons. Electrons are negatively charged and participate in bonding to stabilize the atom. Conversely, protons are positively charged and balance the charge of the atom. In addition, their positive charge attracts the negatively charged electrons. What role do neutrons play in an atom?
Neutrons bind with protons and one another in the nucleus through the strong force , effectively moderating the repulsive forces between the protons and stabilizing the nucleus.$^{[1]}$ $\ce{^2He}$ (2 protons, 0 neutrons) is extremely unstable, though according to theoretical calculations would be much more stable if the strong force were 2% stronger. Its instability is due to spin–spin interactions in the strong force, and the Pauli exclusion principle, which forces the two protons to have anti-aligned spins and gives the $\ce{^2He}$ nucleus a negative binding energy. $\ce{^3He}$ (2 protons, 1 neutron), on the other hand, is stable, and is also the only stable isotope other than $\ce{^1H}$ with more protons than neutrons.$^{[2]}$ $^{[1]}$ Wikipedia, Neutron, Beta Decay and the Stability of the Nucleus $^{[2]}$ Wikipedia, Isotopes of Helium
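A rough back-of-the-envelope calculation (not from the answer's references) shows how much energy this binding is worth for helium-4, using standard atomic masses.

```python
# Binding energy of 4He from the mass defect. Atomic masses in u; 1 u ~ 931.494 MeV/c^2.
m_H1  = 1.007825   # atomic mass of 1H (using atomic masses lets the electrons cancel)
m_n   = 1.008665   # neutron mass
m_He4 = 4.002602   # atomic mass of 4He

mass_defect = 2 * m_H1 + 2 * m_n - m_He4      # in u
binding = mass_defect * 931.494               # in MeV
print(f"{binding:.1f} MeV total, {binding / 4:.2f} MeV per nucleon")
# ~28.3 MeV total, ~7.1 MeV per nucleon: the two neutrons are what make this
# nucleus so tightly bound, whereas the neutron-free 2He is unbound.
```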
{ "source": [ "https://chemistry.stackexchange.com/questions/66618", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39536/" ] }
66,663
Disclaimer: I'm asking this because I am curious. Not because I'm actually handling nitric acid. In another answer I was reading, the poster mentioned that if you spill nitric acid, you shouldn't pour water on anything but a small spill, because it will generate heat (maybe enough to flash-boil the water?), but if you spill nitric acid on yourself, you should have an emergency shower ready. Except, wouldn't the emergency shower be using water and cause the exact same problem? Is this just a case of deciding that possible burns from a hot liquid, while under a cold shower are much better than definite acid burns?
If you spill nitric acid onto the table, you yourself are unharmed and you can use a cool head to decide what to do next. If the spill is small, pour water on it to both dilute it and dilute the heat — remember water has a high specific heat capacity. If the spill is larger, I would probably try adding sodium hydrogencarbonate to react away the acid before wiping it up. Adding larger amounts of water will work, too, but you need to get them from somewhere and then need to get rid of the resulting puddle. If the spill is on you you want to get the acid off your clothes and body immediately. The quickest and most effective solution is to pour water over your head and lots of it. This is what the emergency showers are for. Lots, here, has two functions: it not only quickly dilutes the acid to a less harmful concentration but also (because it is washing down) serves well to actually remove the heat — remember again that water has a high specific heat capacity. Emergency showers have a high throughput for exactly this reason.
{ "source": [ "https://chemistry.stackexchange.com/questions/66663", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/40129/" ] }
66,878
In the case of a person's name, the first letter is written in capital letters. Should the first letter of the name of a chemical compound or element also be written in capital letters?
The names of chemical compounds and elements should be capitalized if they appear at the beginning of a sentence or in a title - that is, they are treated just like any other common noun. For example, a title: Why I Don't Like Zinc or a sentence: Boron is my favorite element. Within a sentence: We used boron and zinc in the experiment. Vinegar contains acetic acid. The symbols for chemical elements are always capitalized, no matter what: We combined $\ce{As}$ and $\ce{W}$ to make a new alloy.
{ "source": [ "https://chemistry.stackexchange.com/questions/66878", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/15429/" ] }
67,199
Aqua regia (Latin: Royal Water) is one of the strongest acids known in chemistry, and is capable of dissolving gold and platinum. My copy of the Oxford science dictionary goes on to say (under the entry: Aqua regia) that metallic silver does not dissolve in aqua regia. Moreover, it does not mention any other examples of aqua regia-resistant metals. Further down, it mentions that silver's invulnerability to aqua regia is due to the formation of a protective silver chloride coating on the metal, which serves to protect the metal from further attack. However, this Wikipedia article claims: "...aqua regia does not dissolve or corrode silver..." This I find contradictory to the dictionary's "formation of silver chloride" claim. So:

1. What metals (elemental, forget alloys) are neither attacked by nor dissolved in (freshly prepared) aqua regia?
2. What makes those metals that don't dissolve or corrode in aqua regia so impervious to the acid?
3. Does silver metal actually develop a silver chloride layer on exposure to aqua regia? If so, would that mean the Wikipedia article is incorrect?
Keep in mind

The answer will depend upon the reaction conditions: most importantly, the physical state of the metal (porosity, degree of comminution), the temperature, and mechanical abrasion of the metal surface during the reaction. Often a chemistry text mentions that no reaction occurs. The reaction might still happen; it is just that, for the specified parameters, the process is negligible.

Short overview of aqua regia

Aqua regia is the $3:1$ volumetric mixture of $\ce{HCl}$ and $\ce{HNO3}$. Its additional reactive power draws from the reactive chlorine species created in situ.$^{[1]\ [2]}$ $$\ce{HNO3 + 3HCl -> Cl2 + NOCl + 2H2O\\ NOCl -> Cl + NO}$$

Which metals are impervious to $3:1\ \ce{HCl/HNO3}$?

Almost every metal will react with aqua regia provided certain criteria are met.$^{[1]\ [2]}$ The closest you will probably get is ruthenium $\ce{Ru}$, and perhaps osmium $\ce{Os}$. To the best of my knowledge, $\ce{Ru}$ will not react with aqua regia in a meaningful way even if the aqua regia is boiling.$^{[2]}$ The difference with $\ce{Os}$ is that powdered osmium is attacked by boiling aqua regia.$^{[1]\ [2]}$ $$\ce{Ru + HNO3 + HCl $\kern.6em\not\kern -.6em \longrightarrow$}$$ $$\ce{\underset{powder}{Os} + $\underbrace{\mathrm{HNO_3}}_{\text{boiling}}$ -> OsO4 + N_xO_y + H2O \\ OsO4 + 2H2O <=> H2[OsO4(OH)2] \\ OsO4 + 4HCl -> OsO2Cl2 + Cl2 + 2H2O\\ OsO2Cl2 + 4HCl -> OsCl4 + Cl2 + 2H2O\\ 2OsO2Cl2 + H2O <=> OsO2 + H2[OsO2Cl4] \\ 3OsCl4 + 2H2O <=> OsO2 + 2H2[OsCl6]\\ OsO2 + 6HCl <=> H2[OsCl6] + 2H2O}$$

Brief discussion of the list provided in the comments

Titanium $\ce{Ti}$ does react, and does so at room temperature. $$\ce{3Ti + $\underbrace{\mathrm{12HCl + 4HNO_3}}_{\text{room temperature}}$ -> 3TiCl4 + 4NO + 8H2O}$$ Rhenium $\ce{Re}$ reacts slowly at room temperature $\ce{->HReO4}$; this will further react with $\ce{HCl -> ReCl4 + Cl2}$.$^{[2]}$ Hafnium $\ce{Hf}$ does react at room temperature. The reaction is slower than with titanium; the overall equation is analogous.$^{[2]}$ Tantalum $\ce{Ta}$ reacts when aqua regia is heated to $150\ ^{\circ}\mathrm{C}$. Rhodium $\ce{Rh}$ reacts when finely ground. As a large compact piece, iridium $\ce{Ir}$ is affected at temperatures above $100\ ^{\circ}\mathrm{C}$. Niobium $\ce{Nb}$ is inert at room temperature.$^{[2]}$ Summary: ruthenium $\ce{Ru}$ is your best bet.

What makes metals $\ce{Ru}$ and $\ce{Os}$ so stable in aqua regia?

The nobility of these metals is not the best explanation. As you correctly pointed out, $\ce{Pt}$ and $\ce{Au}$ react fine. This is direct evidence that for the resistant metals a protective layer should form. The layer varies from metal to metal, but usually it is either an oxide (or oxide hydrate) or a chloride. The effectiveness of mechanical abrasion also points to the formation of a stable, non-reactive compound on the metal's surface. For ruthenium, as of now I am unsure what this precipitate could be. If anyone has a reference, please edit or leave a comment.$^\text{[reference needed]}$

What happens with silver?

Silver and aqua regia react very poorly, and only for a short amount of time.$^{[2]}$ The culprit is $\ce{AgCl}$ ($K_s = 1.8 \cdot 10^{-10}$)$^{[2]}$. A slow reaction might still take place due to complexation.$^{[2]}$ Surprisingly, silver reacts with $\ce{HBr}$!$^{[2]}$ The solubility product of $\ce{AgBr}$ is even lower, $K_s = 5.0 \cdot 10^{-13}$.$^{[3]}$ My guess is that this layer is not as dense as $\ce{AgCl}$, but this still needs verifying.$^\text{[citation needed]}$

References (In progress)

$[1]$ N. N. Ahmetov. Anorgaaniline keemia. (1974) $[2]$ H.
Karik, Kalle Truus. Elementide keemia . (2003) $[3]$ Skoog, West, Holler, Crouch. Fundamentals of Analytical Chemistry. 9th edition. (2014)
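To put a number on how protective that silver chloride layer is, the molar solubilities can be estimated directly from the solubility products quoted above. Complexation by excess chloride, which is what allows the slow residual reaction, is deliberately ignored in this sketch.

```python
from math import sqrt

# Molar solubility s of a 1:1 salt from its solubility product: Ks = s^2.
ksp = {"AgCl": 1.8e-10, "AgBr": 5.0e-13}

for salt, ks in ksp.items():
    s = sqrt(ks)
    print(f"{salt}: s ~ {s:.1e} mol/L")
# AgCl: ~1.3e-05 mol/L; AgBr: ~7.1e-07 mol/L - both layers are essentially
# insoluble, which is what shuts the reaction down at the metal surface.
```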
{ "source": [ "https://chemistry.stackexchange.com/questions/67199", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
67,220
I was reading an article about a chemical reaction, and I came across the phrase: The oxygen atom at this point has three bonds and has a net positive charge How can this happen? Oxygen has 2 missing electrons in the valence shell. Therefore it can only form 2 bonds at the most, if both are sigma bonds. Does it mean the 3rd bond is not covalent? Can it happen with a hydrogen or an ionic bond?
Consider the auto-ionization of water: $\ce{2H2O -> H3O+ + OH-}$ The first oxygen has three bonds, the second only has one. You can think of the reaction as a lone pair on the oxygen of one water molecule ripping off just the proton of a hydrogen on another water molecule, forming a covalent bond between them using only that lone pair. The electron of the hydrogen is left behind and stays with the oxygen of the other molecule. If you calculate the formal charges on each oxygen, you will see that the first one has a positive charge and the second one has a negative charge. The formal charge is just the number of valence electrons minus the number of bonds minus the number of non-bonding electrons (using the Lewis structure), and it is a useful bookkeeping method for thinking about where the electrons are and which structures are most stable. (Figure: formal charge calculation.)
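The bookkeeping in that last sentence is easy to mechanise; a minimal sketch:

```python
def formal_charge(valence_electrons: int, nonbonding_electrons: int, bonds: int) -> int:
    """Formal charge = valence electrons - nonbonding electrons - number of bonds
    (each bond counted once, i.e. half of the bonding electrons)."""
    return valence_electrons - nonbonding_electrons - bonds

# Oxygen in H3O+: 6 valence electrons, one lone pair (2 e-), three O-H bonds
print(formal_charge(6, 2, 3))   # +1
# Oxygen in OH-: 6 valence electrons, three lone pairs (6 e-), one O-H bond
print(formal_charge(6, 6, 1))   # -1
```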
{ "source": [ "https://chemistry.stackexchange.com/questions/67220", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/40318/" ] }
67,874
I vaguely recall having heard that drinking too much water can, over time, prove fatal to the human body. Nothing special about the water; not distilled or de-ionized or anything … just plain ol' water. Now the reason that accompanied this "fact", was that drinking too much water serves to dilute, and thereby, disrupt the electrolyte balance in all (living) body tissue. This in turn, messes with the sodium-potassium pump present in cells leading to loss of functionality of cells. Now I can't seem to find the source of that information, but I do recall it wasn't particularly trustworthy. So what I want to know is: Is it true, that drinking a lot of water can be fatal? If so, is it because the electrolyte concentration inside and outside cells are "diluted"? Does this affect all (living) tissue equally?** Edit: I fail to see why this was put on hold as a personal medical question. I vote to reopen this on the grounds that it does not : a) Ask for advice on treating water intoxication b) Suggest, promote or request, in any way, alternative forms of medicine.
Based on what I gathered from this Wikipedia article, yes. Drinking copious amounts of water can prove fatal. The proper term is "water intoxication". When you start taking in a lot of water (by "a lot" I mean more water than your body can excrete via sweat or urine), the interstitial fluid that bathes the cells forming your (living) tissue ends up getting "diluted", i.e., the concentration of ions like $\mathrm{Na^{+}}$ is greatly reduced. The result? The concentration of ions in the interstitial fluid is far lower than the concentration of ions inside the cells it surrounds. Thinking of this another way: the concentration of water in the interstitial fluid is far higher than the concentration of water inside the cells. Ever soaked dry raisins in water? Over time they begin to absorb water and swell up as a consequence of the concentration gradient set up between the tissue in the raisin (low concentration of water) and the water in the bowl. This phenomenon, where water flows from a region of higher concentration (of water) to a region of lower concentration (of water) across a semi-permeable membrane, is called osmosis. It's pretty much the same thing in the case of water intoxication. A concentration gradient has been set up between the cells and the interstitial fluid that surrounds them. So water begins to flow into the cell, and the cell swells up quite a bit, resulting in a build-up of turgor pressure. This increase in turgor pressure will be seen to varying degrees in all sorts of (living) tissue, especially tissue that is highly vascular. An important example would be neural tissue, particularly that in the brain. This swelling of tissue results in an increase in intracranial pressure, which can lead to loss of functionality over time. This is what makes water intoxication so lethal. There isn't really much about the chemistry of water that's responsible for this, apart from osmosis that is. Addendum: Drinking a ton of water isn't the only way for water to build up in potentially lethal amounts in the body. Hyponatremia, which refers to a condition of low (below-normal) sodium levels in body fluids, can also lead to water accumulating in the body in excess.
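For a sense of scale, the osmotic driving force can be estimated with the van 't Hoff relation $\Pi = cRT$; the 10 mmol/L imbalance used below is an invented, purely illustrative number, not a clinical figure.

```python
# Order-of-magnitude estimate of the osmotic pressure difference, Pi = c * R * T.
R = 8.314                # gas constant, J/(mol K)
T = 310.0                # K, roughly body temperature
delta_c = 10e-3 * 1000   # a hypothetical 10 mmol/L imbalance, converted to mol/m^3

pi = delta_c * R * T     # pressure in Pa
print(f"~{pi / 1000:.0f} kPa")   # ~26 kPa - a substantial pressure for soft tissue
```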
{ "source": [ "https://chemistry.stackexchange.com/questions/67874", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
68,052
I'm struggling to understand the true nature and implications of recent confirmations of the existence of stable species containing carbon atoms bound to six (in another case five) other carbon atoms. Is this really a scientific breakthrough, or one of semantics and scientific media hype? I've had a hard time understanding what I've read on this subject. From Moritz Malischewski, Prof. Dr. K. Seppelt. Angew. Chem. Int. Ed. 2017, 56, 368-370, an open, online article: Herein we report the crystal structure of $\ce{C6(CH3)6^{2+} (SbF6^−)2⋅HSO3F}$, which revealed a non-classical structure for this labile species (Figure 1). Figure 1. Molecular structure of $\ce{C6(CH3)6^{2+}}$ in $\ce{C6(CH3)6^{2+} (SbF6^−)2⋅HSO3F}$; ellipsoids are shown at 50 % probability, C grey, H white; counteranions and co-crystallized $\ce{HSO3F}$ omitted for clarity. Further discussion of this matter was also given in C&EN: "Six bonds to carbon: Confirmed", C&EN, Volume 94 Issue 49, p. 13 (abstract). So, I understand how the octet rule is not actually violated. But then, can these molecules really contain carbons with 6 primarily covalently bound carbon neighbors? In what orbitals do all these electrons lie? Should we not be surprised, since similar hypervalency occurs elsewhere on the periodic table? Any clarification at all of these issues would be greatly appreciated.
The carbon is not hexavalent, it is hexacoordinated. A covalent bond does not equate to a total of two electrons between the bonding partners, and the nature of the chemical bond may lie somewhere between totally covalent and totally ionic. Examples of this include boranes, with their three-centre-two-electron bonds. But we don't need to stop there; any π coordination has fewer electrons in the bonding orbitals than twice the number of bonding partners. We struggle to understand the bonding in simple molecules like carbon monoxide, while more complex molecules, like many organic molecules, all have similar bonding situations which are quite easy to grasp. The concept of aromaticity is still not fully understood, and while we believe it is as simple as drawing an MO diagram, it most certainly is not. However, molecular orbitals certainly enhance our understanding.

Chemists in general are interested in unusual bonding situations, since they challenge our understanding of the bonding and the chemistry of such molecules itself. One of the prominent examples is the 2-norbornyl cation, the structure of which remained a mystery for a long time since it didn't fit within the common constraints of organic chemistry. Such ions are nowadays usually referred to as non-classical ions. Their bonding is different from the more common two-electron-two-centre bonds we expect in organic (and inorganic) molecules. In a first approximation they can be described with resonance (see here: What is resonance, and are resonance structures real?), but that understanding comes with a few misconceptions. An MO description involves multi-centre bonds and typically bond orders of less than one. Another interesting example of unusual bonding is that of fluxional molecules like bullvalene. (See also here: What is the conformer distribution in monosubstituted fluoro bullvalene?) Similarly, the bonding situation in these molecules is quite fluid, which allows them to change shape in such a way that at room temperature we obtain a single signal in the proton NMR (Addison Ault. J. Chem. Educ. 2001, 78 (7), 924-927).

As such, the hexamethylbenzene dication is another representative of unusual bonding situations. While it was prepared more than forty years ago, it took until now to confirm the actual structure. It is special since it contains a pyramidally coordinated carbon exclusively bonded to other carbons, and it marks the highest observed coordination number for carbon so far. The authors actually state the general motivation behind such approaches in the first two sentences: "The tetravalency of carbon and the hexagonal-planar ring structure of benzene are fundamental axioms of organic chemistry, and were developed 150 years ago by Kekulé. Chemists have long been fascinated by finding exceptions from these rules."

Let's go a little deeper, and look at the questions you are asking step by step:

So, I understand how the octet rule is not actually violated. But then, can these molecules really contain carbons with 6 primarily covalently bound carbon neighbors?

The covalent-to-ionic character of a bonding orbital is completely independent of whether it is occupied or not. I believe there still is no unilaterally accepted criterion for what constitutes a covalent bond and what constitutes an ionic bond. As previously stated, a bond does quite often deviate from the traditional Lewis concept. This is to be expected, as the Lewis concept is quite crude and cannot account for many chemical phenomena.
We can analyse the electron density with the Quantum Theory of Atoms in Molecules (QTAIM), which gives us a set of parameters by which we can judge whether a bond is predominantly covalent or ionic. First and foremost we can obtain bond paths, i.e. paths connecting nuclei along which the value of the electron density is at a maximum. On such a path we will find a bond critical point, where the electron density is at a minimum along the path. The Laplacian at this point gives us a measure of whether the bond is predominantly covalent (negative) or ionic (positive). In the picture below, areas of positive Laplacian (charge depletion) are marked with solid lines, and areas of negative Laplacian (charge accumulation) with dashed lines. The location of the plane is given in the 3D model.

         rho / a.u.    Laplacian / a.u.
BCP44    0.259         -0.811
BCP47    0.294         -0.121
BCP53    0.157          0.226
BCP61    0.270         -0.882
BCP64    0.288         -0.117

From pure visual inspection we can see that the area between C10 and C12 is clearly a predominantly covalent bond, which is obvious from the dashed lines. This is also supported by the numbers. Visual inspection of C10 to C4 would suggest the same; however, the bond critical point lies outside of the charge accumulation area. This might well be an issue with the methodology: I have reproduced the calculation from the paper only on their geometry, so the optimum MP2 density might be slightly different and describe the bond as completely covalent. Also note that there are no symmetry restrictions, which could also lead to slight variations. From what we can see, we can nevertheless conclude that there is a significant covalent contribution. The values of the electron density (rho) also show that the pyramidal bonds are only about half as strong as the bonds in the 1-ethylium-1-ylidene unit or the five-membered ring. (Fun fact: the AIM analysis shows no cage critical point. From a technical point of view, the five-membered ring is not a ring. The whole molecule is rather a goblet (of fire, lol).)

In what orbitals do all these electrons lie?

Since it is a cation of benzene, the atomic orbitals involved in bonding are the same as in regular benzene. The total number of occupied molecular orbitals is one fewer than in the neutral case. Obviously, due to the different geometric arrangement, they also combine differently. The canonical orbitals are actually quite pretty and reproduce the (approximate) symmetry quite nicely, as some of the orbitals are nearly degenerate (for example HOMO and HOMO-1, or LUMO and LUMO+1), but have a look for yourself:

Should we not be surprised, as similar hypervalency occurs elsewhere on the periodic table?

Well, no. Hypercoordinate bonding is one of the most common patterns for anything outside of carbon chemistry. A few words of advice: it has nothing to do with hypervalency. The term is an ancient relic from the times when people believed that d-orbitals are necessary to describe molecules that exceed the coordination number dictated by Lewis theory. The gold book writes on hypervalency: "The ability of an atom in a molecular entity to expand its valence shell beyond the limits of the Lewis octet rule. Hypervalent compounds are common for the second and subsequent row elements in groups 15–18 of the periodic table. A description of the hypervalent bonding implies a transfer of the electrons from the central (hypervalent) atom to the nonbonding molecular orbitals which it forms with (usually more electronegative) ligands."
Note that this orbital expansion is not necessary when considering MO theory. As such, the term valency, and especially hypervalency, should be avoided at all costs. It is important to know that, according to the gold book, valence is an absolute property of an element; it does not change: "The maximum number of univalent atoms (originally hydrogen or chlorine atoms) that may combine with an atom of the element under consideration, or with a fragment, or for which an atom of this element can be substituted." Carbon is tetravalent. Always. (Seriously, from now on never use *valen{ce .. cy .. t} again, please.)

Is this really a breakthrough of science or one of semantics and scientific media hype?

Of course it is. It is always something incredible when we refine our understanding of the universe and challenge our own theories. Is it as groundbreaking as the discovery of electricity? Probably not. However, with such advances in synthetic chemistry we can obtain more robust data with which to develop our theoretical models. We can, in general, expand our knowledge of bonding and think outside the box. With advances like this we achieve the complete breakdown of our theories; without them we would probably still be using Lewis structures (for everything, and that would be ridiculous).
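The sign-of-the-Laplacian bookkeeping used in the QTAIM discussion above is simple enough to script; here is a minimal sketch that just reclassifies the tabulated bond critical points (the numbers are the ones from the table, in atomic units).

```python
# Classify bond critical points (BCPs) by the sign of the Laplacian of the density:
# negative -> charge accumulation (predominantly covalent),
# positive -> charge depletion (closed-shell / more ionic-like interaction).
bcps = {
    "BCP44": (0.259, -0.811),
    "BCP47": (0.294, -0.121),
    "BCP53": (0.157,  0.226),
    "BCP61": (0.270, -0.882),
    "BCP64": (0.288, -0.117),
}

for name, (rho, laplacian) in bcps.items():
    kind = "predominantly covalent" if laplacian < 0 else "closed-shell / ionic-like"
    print(f"{name}: rho = {rho:.3f} a.u., Laplacian = {laplacian:+.3f} a.u. -> {kind}")
```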
{ "source": [ "https://chemistry.stackexchange.com/questions/68052", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39539/" ] }
68,054
I often encounter the term "fluoride" used as if it were a chemical. From my understanding, it would be incorrect to call rust "oxide" rather than iron oxide, or to call salt "chloride" instead of sodium chloride. Isn't it likewise incorrect to say that drinking water or toothpaste has fluoride?
The carbon is not hexavalent, it is hexacoordinated. A covalent bond does not necessarily correspond to a total of two electrons shared between the bonding partners, and the nature of a chemical bond may lie anywhere between totally covalent and totally ionic. Examples of this include the boranes, with their three-centre-two-electron bonds. But we don't need to stop there; any π coordination involves fewer electrons in the bonding orbitals than two per bonding partner. We struggle to understand the bonding in simple molecules like carbon monoxide, while many much more complex organic molecules have bonding situations that are quite easy to grasp. The concept of aromaticity is still not fully understood, and while we would like to believe it is as simple as drawing an MO diagram, it most certainly is not. However, molecular orbitals certainly enhance our understanding.

Chemists in general are interested in unusual bonding situations, since these challenge our understanding of bonding and of the chemistry of such molecules itself. One of the prominent examples is the 2-norbornyl cation, the structure of which remained a mystery for a long time since it didn't fit within the common constraints of organic chemistry. Such ions are nowadays usually referred to as non-classical ions. Their bonding is different from the more common two-electron-two-centre bonds we expect in organic (and inorganic) molecules. To a first approximation they can be described with resonance (see here: What is resonance, and are resonance structures real?), but that description comes with a few misconceptions. An MO description involves multi-centre bonds and typically bond orders of less than one.

Another interesting example of unusual bonding is found in fluxional molecules like bullvalene. (See also here: What is the conformer distribution in monosubstituted fluoro bullvalene?) The bonding situation in these molecules is similarly fluid, which allows them to change shape so readily that at room temperature we observe a single signal in the proton NMR (Addison Ault, J. Chem. Educ. 2001, 78 (7), 924–927).

The hexamethylbenzene dication is another representative of unusual bonding situations. While it was prepared more than forty years ago, it took until now to confirm its actual structure. It is special because it contains a pyramidally coordinated carbon bonded exclusively to other carbons, and it marks the highest observed coordination number for carbon so far. The authors actually state the general motivation behind such approaches in the first two sentences:

The tetravalency of carbon and the hexagonal-planar ring structure of benzene are fundamental axioms of organic chemistry, and were developed 150 years ago by Kekulé. Chemists have long been fascinated by finding exceptions from these rules.

Let's go a little deeper and look at the questions you are asking step by step:

So, I understand how the octet rule is not actually violated. But then, can these molecules really contain carbons with 6 primarily covalently bound carbon neighbors?

The covalent-to-ionic character of a bonding orbital is completely independent of whether it is occupied or not. I believe there still is no universally accepted criterion for what constitutes a covalent bond and what constitutes an ionic bond. As previously stated, a bond quite often deviates from the traditional Lewis concept. This is to be expected, as the Lewis concept is quite crude and cannot account for many chemical phenomena.
{ "source": [ "https://chemistry.stackexchange.com/questions/68054", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/37304/" ] }
68,251
Under spherical symmetry, the irreducible representations corresponding to $L = 0, 1, 2, \cdots$ are assigned the letters $\mathrm{S}, \mathrm{P}, \mathrm{D}, \mathrm{F}, \cdots$ after which the letters progress in alphabetical order. (The familiar names of the atomic orbitals are also labelled with this sequence of letters.) Therefore, we have: $$\mathrm{S}, \mathrm{P}, \mathrm{D}, \mathrm{F}, \mathrm{G}, \mathrm{H}, \mathrm{I}, \mathrm{K}, \cdots$$ but as far as I can tell, $\mathrm{J}$ is conspicuously omitted. I am guessing there is a reason behind this - what is it? Is it to avoid confusion with some other $J$, like the total angular momentum quantum number $J$, or is it perhaps based on some typographical argument (J being easily confused with I)?
Omitting j when enumerating things alphabetically has a long tradition. First of all, the alphabet did not always exist in the form we know it today. Quoting Wikipedia:

After [...] the 1st century BC, Latin adopted the Greek letters ⟨Y⟩ and ⟨Z⟩ [...] Thus it was during the classical Latin period that the Latin alphabet contained 23 letters: [no J, V, W] [...] It was not until the Middle Ages that the letter ⟨W⟩ [...] was added [...] only after the Renaissance did the convention of treating ⟨I⟩ and ⟨U⟩ as vowels, and ⟨J⟩ and ⟨V⟩ as consonants, become established. Prior to that, the former had been merely allographs of the latter.

In some books, this has consequences to this day:

The footnotes to the Confession and Catechisms [of the Presbyterian Church], containing the proof texts, are enumerated in the traditional manner, that is, by letters of the alphabet (omitting j and v, as alternative forms for i and u in the Latin alphabet). Source

But even later, long after the letters I and J were considered distinct in terms of proper spelling, their alphabetical order (I preceding J) was not firmly established. The New General English Dictionary of 1768 had a combined section for I and J, treating both as equal with respect to alphabetical order (but not with regard to spelling). The same holds for the Handwörterbuch der allgemeinen Chemie, a German chemistry book printed in 1818. The latest such book I could find is the Handwörterbuch der reinen und angewandten Chemie of 1850. This is pretty close to the time when the letters s, p, d, f, g, h, i, k, ... must have been defined! (Does anyone know when exactly this was?)

And now it makes perfect sense to me. Even if i and j were distinct letters at the time, and their order should have been commonly established by the early 20th century, the possibility that some readers could still be confused about which comes first must have led to the decision to leave j out.

Update (2017-12)

Concerning the spelling and ordering of names, I was able to find evidence that is even 100 years more recent. The Berlin telephone book contained a spelling table. The first book, of 1890, contained a spelling table that assigned numbers to the letters, omitting the letter J, i.e. I = 9 and K = 10. That is pretty similar to our orbital labels, isn't it? In the 1903 printing, words were assigned to the letters, but J was still left out. The 1905 printing was the first to include J in the spelling table. (source) Even the Berlin address book of 1943 did not distinguish between I and J; for instance, Jutta is listed before Iwanski. (Interestingly, this book doesn't even use different glyphs for I and J in the Fraktur font.)

Of course, there are many books from around that time, and earlier, that sort I before J and that use different glyphs, even if printed in Fraktur (example). Nevertheless, this shows that the convention "I before J", as we know it today, was not firmly established in early-20th-century Germany.
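For readers who just want the practical labelling convention from the question in programmatic form, here is a tiny sketch. It deliberately covers only the letters the question explicitly lists (L = 0–7, with J skipped); the conventions for still higher L are not discussed here, so the table is not extended beyond K.

```python
# Spectroscopic letters for orbital angular momentum L, as listed in the question:
# S, P, D, F, then alphabetical with J omitted.
TERM_LETTERS = ["S", "P", "D", "F", "G", "H", "I", "K"]  # L = 0 .. 7

def term_letter(L: int) -> str:
    if 0 <= L < len(TERM_LETTERS):
        return TERM_LETTERS[L]
    raise ValueError("L outside the range covered here (0-7)")

print([term_letter(L) for L in range(8)])  # ['S', 'P', 'D', 'F', 'G', 'H', 'I', 'K']
```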
{ "source": [ "https://chemistry.stackexchange.com/questions/68251", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/16683/" ] }
69,371
I have read that Teflon coatings undergo decomposition at temperatures above 300 degrees Celsius. What do they decompose into? And what other conditions apply for the decomposition? Is this decomposition harmful to humans, given that cooking can go beyond that temperature?
Like any other polymer decomposition process, the products of PTFE decomposition depend on the chemical species present while the PTFE is decomposing and on the temperature. The general process is this: decomposition is initiated by random chain scission, followed by depolymerization; termination is by disproportionation. All of this happens rapidly above 600 K (~326 °C). No other conditions are required for the decomposition to take place, so it can happen in dry or aqueous environments, which give different by-products. Cooking below 200 °C should be completely safe, since the mass loss below 300 °C is undetectable. Only above the glass transition temperature (~326 °C) is the mass loss significant.

The decomposition products depend on the environment; oxygen usually does not enter the cycle directly but through water, to give species like carbonyl fluoride (carbonyl difluoride). As expected, other fluorinated alkanes and alkenes are also obtained, such as the monomer tetrafluoroethylene, hexafluoroethane, octafluorocyclobutane, octafluoroisobutylene (perfluoroisobutylene), and more. However, the species listed above are for controlled lab experiments; what happens in a real-life cooking scenario cannot be fully anticipated. It can readily be said, though, that the above species can react with other chemicals in food to give fluorinated compounds that are harmful to humans. As a precaution, to be completely safe one should not cook above 200 °C with PTFE-coated utensils.

Check this extremely detailed page on Fluoridealert.org for abstracts and toxicity remarks from research papers on PTFE decomposition, where the conditions and levels of toxicity are reported in an organized manner. See also: Polymer decompositions (check the PTFE section for general comments).
{ "source": [ "https://chemistry.stackexchange.com/questions/69371", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/29666/" ] }
70,347
My high-school chemistry teacher taught us the mantra "bases are slippery-soapy-slimy!" This question and this answer in Quora say this is due to saponification - conversion of lipids from the top layer dead skin cells into a soap-like substance. During some household cleaning I ended up with a small amount of a fairly strong solution of bleach in water between my thumb and forefinger and noticed it felt quite slippery, and it took several seconds of rubbing under water for it to stop. In this case I would guess that saponification is not the explanation for the very slippery feeling. Is there another explanation?
Maybe it needs to be clarified that the salt of a strong base and a weak acid can bring about saponification. The fact that bleach reacts with fatty acids to create soap therefore does not require bleach to be pure base (nor does it mean that something other than saponification must be happening). Household bleach is mainly sodium hypochlorite ($\ce{NaClO}$) dissolved in water (typically less than 5 %). One reason it works as a disinfectant is that it reacts with the fatty acids of living organisms' membranes and turns them into soap. $$\ce{NaClO (bleach) + R-COOH (fatty acid) -> HClO + R-COONa (soap)}$$ (There are other mechanisms by which hypochlorite is known to perform disinfection, though the focus of this answer is to address how the slippery feeling comes about.)

Why is soap slippery?

The non-polar side of the soap molecule interacts less strongly with solid surfaces than polar substances such as water do. Therefore soapy water flows over solid surfaces with less friction than water does, and so it feels more slippery.
{ "source": [ "https://chemistry.stackexchange.com/questions/70347", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/16035/" ] }
70,408
Does water have a chemical name? If so, what is it? P.S. I checked the web and got all sorts of crazy answers like dihydrogen monoxide, oxidane, hydrogen dihydride, etc. Please validate.
TL;DR IUPAC hasn't made up their mind, but plain old water appears to be an appropriate name. However, chemical derivatives of water may not be named using water. In Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013 it is stated (P-21.1.1.2) that

The common names water, ammonia, [...] are used in these recommendations but their use as preferred IUPAC names is deferred pending publication of recommendations for the selection of preferred inorganic names; thus, no PIN label will be assigned in names including them.

I'm not sure when the next edition of the Red Book (i.e. inorganic chemistry nomenclature recommendations) is coming out, but I can quote the relevant section from the most recent version. Table IR-6.1 of Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005 lists "oxidane" as the parent hydride name for $\ce{H2O}$. However, it also adds the caveat that

The names 'azane' and 'oxidane' are only intended for use in naming derivatives of ammonia and water, respectively, by substitutive nomenclature, and they form the basis for naming polynuclear entities (e.g. triazane, dioxidane). Examples of such use may be found in Section IR-6.4 and Table IX.

Therefore the compound $\ce{ONONO} = \ce{(ON)2O}$, a nitrosylated derivative of water, is named "dinitrosooxidane" and not "dinitrosowater". More examples may be found in Table IX of the same publication. Water itself is still called water. For example, tritiated water, $\ce{H^3HO}$, is named ($\ce{^3H_1}$)water (Section IR-2.2.3.2).
{ "source": [ "https://chemistry.stackexchange.com/questions/70408", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/42352/" ] }
71,249
In transition metal chemistry the Jahn–Teller effect arises when the configuration of the metal ion and d orbital splitting set up a doubly degenerate state, which is less stable than a state without the degeneracy but with lower symmetry. In Ian Fleming's Molecular Orbitals it is stated that cyclobutadiene would be a diradical considering Huckel theory: but it isn't because of Jahn–Teller distortion (it is actually rectangular and ESR shows no spin). There isn't a doubly degenerate state in this case, yet Jahn–Teller distortion happens according to Fleming. How is this so? (In the singlet state it would be doubly degenerate, but how to know it a priori?)
Very interesting question, and it kept me up despite daylight saving time cheating me of one hour of sleep last night... A good reference is Albright, Burdett and Whangbo, Orbital Interactions in Chemistry, 2nd ed., pp 282ff., which explains this in much greater detail than I can. (In general, that book is fantastic.) I will try my best to summarise what they have written - please point out any mistakes that I may have made. I think there is also much more information out there in the literature (a search for "Jahn–Teller cyclobutadiene" should give lots of good results), although it is of course not always easy to read. $\require{begingroup}\begingroup\newcommand{\ket}[1]{|#1\rangle}$

1. What is the ground state of undistorted $\ce{C4H4}$?

Based on simple Huckel theory, as you have shown above, one would expect the ground state to be a triplet state with the two highest-energy electrons in different orbitals. This is simply because of exchange energy (Hund's first rule). However, it turns out that the singlet state with the two electrons in different orbitals is more stable. That explains the lack of signal in ESR experiments. (Note that this orbital picture is a simplification, as I will describe later.)

In short, this is due to dynamic spin polarisation; the idea is that there are low-lying excited singlet states which mix with the ground state to stabilise it. This is the basic idea underlying so-called configuration interaction, which mixes excited molecular states into the wavefunction in order to obtain a better estimate of the ground-state wavefunction. If you are interested in the theoretical aspect of this, I think Szabo and Ostlund's Modern Quantum Chemistry should cover it. The topic of dynamic spin polarisation is treated in depth on pp 285ff. of Albright et al.

Your question, "how to predict the singlet state a priori?", is a good one. Albright et al. write (p 289):

In general, high-level configuration interaction calculations are needed to determine the relative stabilities of the singlet and triplet states of a diradical.

So, it does not seem to be something that you can figure out a priori, and it is most certainly not something you can figure out by just using Huckel theory (which is an incredibly simplistic treatment).
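As an aside, the Huckel starting point itself is easy to reproduce numerically. Below is a minimal sketch (my own illustration, not taken from the references) that diagonalises the Huckel matrix of a square four-membered ring and recovers the orbital pattern behind the naive diradical prediction: one low-lying orbital, a degenerate nonbonding pair, and one high-lying orbital. It says nothing about spin polarisation or configuration interaction; those require the higher-level treatments described above.

```python
import numpy as np

# Huckel model for square cyclobutadiene: alpha on the diagonal (set to 0),
# beta on bonds between neighbouring carbons (set to -1, i.e. energies in units of |beta|).
beta = -1.0
H = np.array([
    [0,    beta, 0,    beta],
    [beta, 0,    beta, 0   ],
    [0,    beta, 0,    beta],
    [beta, 0,    beta, 0   ],
])

energies = np.sort(np.linalg.eigvalsh(H))
print(energies)  # [-2.  0.  0.  2.]  i.e. alpha+2*beta, alpha, alpha, alpha-2*beta

# Four pi electrons: two fill the orbital at alpha + 2*beta (eigenvalue -2 here);
# the remaining two go into the degenerate pair at alpha -- the naive "diradical" picture.
```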
2. What is the degeneracy of the ground state?

You also wrote that "the singlet state will be doubly degenerate". However, in the case where both electrons are in different orbitals, there is only one possible spatial wavefunction, due to the quantum mechanical requirement of indistinguishability. (As we will see in the group theoretical treatment, there is actually no doubly degenerate case, not even when the electrons are paired in the same orbital.) If I label the two SOMOs above as $\ket{a}$ and $\ket{b}$, then the combination $\ket{ab}$ is not a valid spatial wavefunction; neither is $\ket{ba}$. You have to take linear combinations of these two states, such that your spatial wavefunction is symmetric with respect to interchange of the two electrons. In this case the appropriate combination is $$2^{-1/2}(\ket{ab} + \ket{ba}) \qquad \mathrm{{}^1B_{2g}}$$ There are two more possible symmetric wavefunctions: $$2^{-1/2}(\ket{aa} + \ket{bb}) \qquad \mathrm{{}^1A_{1g}}$$ $$2^{-1/2}(\ket{aa} - \ket{bb}) \qquad \mathrm{{}^1B_{1g}}$$ (Simply based on the requirement of indistinguishability, one might expect $\ket{aa}$ and $\ket{bb}$ themselves to be admissible states. However, this is not true. If anybody is interested in knowing more, please feel free to ask a new question.)

The orbital picture is actually woefully inadequate for quantum mechanical descriptions of bonding. However, Voter and Goddard tried to make it work by adding and subtracting electron configurations [1]:

Ignore the energy ordering in this diagram. It is based on Hartree–Fock calculations, which do not include the effects of excited-state mixing (CI is a "post-Hartree–Fock" method), and therefore the triplet state is predicted to be the lowest in energy. Instead, just note the form of the three singlet states; they are the same as what I have written above, excluding the normalisation factor.

The point of this section is that this singlet state is (spatially) singly degenerate. The orbital diagrams that organic chemists are used to are inaccurate; the actual electronic terms are in fact linear combinations of electronic configurations. So, using orbital diagrams to predict degeneracy can fail!

3. Why is there a Jahn–Teller (JT) distortion in a singly degenerate state?

You are right in that for a JT distortion - a first-order JT distortion, to be precise - to occur, the ground state must be degenerate. This criterion is often rephrased in terms of "asymmetric occupancy of degenerate orbitals", but this is just a simplification for students who have not studied molecular symmetry and electronic states yet. Jahn and Teller made the meaning of their theorem very clear in their original paper [2]; already in the first paragraph they write that the stability of a polyatomic molecule is not possible when "its electronic state has orbital degeneracy, i.e. degeneracy not arising from the spin [...] unless the molecule is a linear one". There is no mention of "occupancy of orbitals".

So, this is not a first-order JT distortion, which requires degeneracy of the ground state. Instead, it is a second-order JT distortion. Therefore, most of the other discussion so far on the topic has unfortunately been a little bit off-track. (Not that I knew any better.) Albright et al. describe this as a "pseudo-JT effect" (p 136). For a better understanding I would recommend reading their section on JT distortions (pp 134ff.); it is very thorough, but you would need some understanding of perturbation theory in quantum mechanics. Please note that there is a typo on p 136, as is pointed out in the comments to this answer. (Alternatively, there is also a discussion of second-order JT effects in a paper by Pearson [3].)

The idea is that the distortion reduces the symmetry of the molecule and therefore allows ground and excited states to mix. This leads to a stabilisation of the ground state and hence an electronic driving force for the distortion. In slightly more detail, we can use a group theoretical treatment to do this. The relevant second-order correction to the energy is given by $$\sum_{j\neq i}\frac{|\langle i |(\partial H/\partial q)| j \rangle|^2}{E_i^{(0)} - E_j^{(0)}}$$ where $\ket{i}$ is the ground state (which in this case is described by $2^{-1/2}(\ket{aa} - \ket{bb})$) and $\{\ket{j}\}$ are the excited states. $E_i^{(0)}$ is the unperturbed energy of the ground state $\ket{i}$, and likewise for $E_j^{(0)}$. $q$ is a vibrational coordinate of the molecule. Note that the denominator of each term, $E_i^{(0)} - E_j^{(0)}$, is negative while the numerator is non-negative, so this correction to the energy is always negative, i.e. stabilising. (There is another second-order term which isn't relevant to the discussion here.)
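To make this second-order stabilisation a little more tangible, here is a toy two-level model; it is my own illustrative sketch, not a calculation on cyclobutadiene. The ground and excited states are coupled linearly in the distortion coordinate q (coupling strength lambda, gap Delta), and a harmonic term (1/2)k*q^2 stands in for the rest of the potential; all three parameters are arbitrary illustrative numbers. When 2*lambda^2/Delta exceeds k, the undistorted geometry (q = 0) turns from a minimum into a maximum, which is exactly the signature of a pseudo-JT-driven distortion.

```python
import numpy as np

Delta = 1.0   # energy gap between ground and excited state at q = 0 (arbitrary units)
lam   = 0.8   # linear vibronic coupling strength (arbitrary)
k     = 0.5   # harmonic force constant of the undistorted potential (arbitrary)

def ground_state_energy(q: float) -> float:
    """Lower eigenvalue of the 2x2 vibronic Hamiltonian plus the elastic term."""
    H = np.array([[0.0,     lam * q],
                  [lam * q, Delta  ]])
    return np.linalg.eigvalsh(H)[0] + 0.5 * k * q**2

qs = np.linspace(-2.0, 2.0, 401)
energies = np.array([ground_state_energy(q) for q in qs])

print(f"E(q = 0)  = {ground_state_energy(0.0):+.4f}")
print(f"min E(q)  = {energies.min():+.4f} at q = {qs[energies.argmin()]:+.3f}")
# With these numbers 2*lam**2/Delta = 1.28 > k = 0.5, so the minimum lies at q != 0:
# the symmetric structure is unstable with respect to the distortion (pseudo-JT effect).
```

The group-theoretical conditions discussed next determine when such a coupling term is allowed to be non-zero in the first place.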
The idea is that in order to have a significant stabilisation upon distortion, two criteria must be fulfilled:

1. The denominator must be small, i.e. the excited state must be low-lying.
2. The numerator must be non-zero, i.e. there must be appropriate symmetry such that $\Gamma_i \otimes \Gamma_j \otimes \Gamma_H \otimes \Gamma_q$ contains the totally symmetric irreducible representation. Since $H$ transforms as the TSIR, this is equivalent to the criterion that $\Gamma_q = \Gamma_i \otimes \Gamma_j$.

In cyclobutadiene, the two SOMOs in $D_\mathrm{4h}$ symmetry transform as $\mathrm{E_u}$. To find the irreps of the resultant states we take $$\mathrm{E_u \otimes E_u = A_{1g} + [A_{2g}] + B_{1g} + B_{2g}}$$ (Note that there is no spatially degenerate term in this direct product, so there cannot be a first-order JT distortion!) The $\mathrm{A_{2g}}$ state in square brackets corresponds to the triplet case and we can ignore it. In our case (presented without further proof, because I'm not sure why myself) the ground state is $\mathrm{B_{1g}}$. The two low-lying singlet excited states transform as $\mathrm{A_{1g}} \oplus \mathrm{B_{2g}}$. Therefore, if there exists a vibrational mode transforming as either $\mathrm{B_{1g}} \otimes \mathrm{A_{1g}} = \mathrm{B_{1g}}$ or $\mathrm{B_{1g}} \otimes \mathrm{B_{2g}} = \mathrm{A_{2g}}$, then the numerator will be nonzero and we can expect this vibrational mode to lead to a distortion with concomitant stabilisation.

In our case, it happens that there is a vibrational mode with symmetry $\mathrm{B_{1g}}$, which leads to the distortion from a square to a rectangle. This therefore allows for a mixing of the ground $\mathrm{B_{1g}}$ state with the excited $\mathrm{A_{1g}}$ state upon distortion, leading to electronic stabilisation. Another way of looking at it is that upon lowering of the symmetry from square ($D_\mathrm{4h}$) to rectangular ($D_\mathrm{2h}$), both the $\mathrm{B_{1g}}$ ground state and the $\mathrm{A_{1g}}$ excited state adopt the same symmetry, $\mathrm{A_g}$. You can prove this by looking at a descent-in-symmetry table. Therefore, mixing between these two states will be allowed in the new geometry.

Because we are talking about the mixing of states here, the orbital picture is not quite enough to describe what is going on, sadly. The best representation we can get is probably this:

However, Nakamura et al. have produced potential energy curves for the distortion of cyclobutadiene from square to rectangular and rhomboidal geometry [4]. Here is the relevant one (rectangular geometry):

Solid lines indicate singlet states, and dashed lines triplet states. Note that this diagram depicts what I have described above: the singlet ground state ($\mathrm{{}^1B_{1g}}$) and the first excited singlet state ($\mathrm{{}^1A_{1g}}$) have the same irrep upon distortion. The ground state is stabilised upon the lowering of symmetry, precisely due to this mixing.

References

(1) Voter, A. F.; Goddard, W. A. The generalized resonating valence bond description of cyclobutadiene. J. Am. Chem. Soc. 1986, 108 (11), 2830–2837. DOI: 10.1021/ja00271a008.
(2) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142.
(3) Pearson, R. G. The second-order Jahn–Teller effect. J. Mol. Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4.
(4) Nakamura, K.; Osamura, Y.; Iwata, S. Second-order Jahn–Teller effect of cyclobutadiene in low-lying states. An MCSCF study. Chem. Phys. 1989, 136 (1), 67–77. DOI: 10.1016/0301-0104(89)80129-6.

$\endgroup$
{ "source": [ "https://chemistry.stackexchange.com/questions/71249", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/7159/" ] }
71,252
The question from which my doubt arose: The following system is in equilibrium: $\ce{SO2Cl2 + {Heat} -> SO2 + Cl2}$ What will happen to the temperature of the system initially if some $\ce{Cl2}$ is added to it at constant volume?

My attempt at the question: if we add more $\ce{Cl2}$, the concentration of products will increase, which will shift the equilibrium in the backward direction, which will in turn release more heat. And since the heat is being released to the surroundings, the temperature of the system should decrease. However, according to my textbook the answer is that the temperature of the system will increase. So please do explain where my understanding is wrong.
{ "source": [ "https://chemistry.stackexchange.com/questions/71252", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/40301/" ] }
71,277
Correct me if I'm wrong, but I'd like to think of the vapor pressure of water in terms of the number of water molecules escaping from the water surface per unit area: the greater the number leaving, the greater the vapor pressure. If this is right, can I conclude that the rate of evaporation is directly proportional to the vapor pressure? My textbook says that the vapor pressure of water depends only on the temperature of the water; it doesn't depend on external pressure or humidity. But we know that the rate of evaporation decreases with increasing humidity. (Wet clothes dry faster in less humid air.) Doesn't this imply a decrease in vapor pressure with an increase in humidity?
{ "source": [ "https://chemistry.stackexchange.com/questions/71277", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/42349/" ] }
71,852
I'm in AP Chemistry and we are learning about the Brønsted–Lowry model. My teacher mentioned that "for the most part" acids have hydrogen. Could there be such a thing as an acid that has no hydrogen in it?
It depends on which definition of acids and bases you are using. According to the Arrhenius theory, an acid is defined as a compound or element that releases hydrogen ions ($\ce{H+}$) into solution. Therefore, there are no Arrhenius acids without a hydrogen atom.

According to the Brønsted–Lowry acid–base theory, an acid is any substance that can donate a proton and a base is any substance that can accept a proton. Hence, there are no acids without a hydrogen atom according to this theory either.

But according to the Lewis theory of acids and bases, an acid is any substance that can accept a pair of nonbonding electrons; in other words, a Lewis acid accepts a lone pair of electrons. According to this theory, acids without a hydrogen atom can exist. (A coordinate bond is formed between the Lewis acid and the Lewis base; the compound formed by the two is called a Lewis adduct.)

A great example of this would be $\ce{BF3}$. It is neither an Arrhenius acid nor a Brønsted–Lowry acid, but it is a Lewis acid: the boron atom accepts a pair of nonbonding electrons from another atom or ion to complete its octet. Here $\ce{BF3}$ is the Lewis acid as it accepts a pair of nonbonding electrons, and the fluoride ion is the Lewis base as it donates a pair of electrons. If you want to get more rigorous about the definition of Lewis acids: "a Lewis acid is a type of chemical that reacts with a Lewis base to form a Lewis adduct". More about Lewis acids and bases here.
{ "source": [ "https://chemistry.stackexchange.com/questions/71852", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39726/" ] }
71,894
I believe I saw this claim somewhere on the internet a long time ago. Specifically, it was claimed that the difference could be observed by filling one long, straight tube with light water and one with heavy water, and looking through both tubes lengthwise (so that light has to travel through the tubes' lengths before reaching the eye), whereupon the light water would appear blue as it does in the oceans, and the heavy water would not. The explanation given was that heavy water has a different vibrational spectrum because of the greater mass of the $^2$H atom, which seemed perfectly plausible. However, I am no longer able to find a source for this claim, which is strange because if it were true, surely it would not be so difficult to find a source?
Based on your description, I may have found the article you originally saw, or at least one very similar. Researchers from Dartmouth College published a paper$\mathrm{^1}$ in which they report, among other things, the results of viewing sunlit white paper through two 3-meter lengths of plexiglass, one filled with $\ce{H2O}$ and one with $\ce{D2O}$. Sure enough, because the absorption maximum of $\ce{D2O}$ in the red to near-IR region lies at a lower frequency, the blue color that is characteristic of $\ce{H2O}$ is far less pronounced in $\ce{D2O}$. This website is based on the published paper and additionally shows a photograph of the blue-colored $\ce{H2O}$ on the left and the far less colored $\ce{D2O}$ on the right.

1. "Why is Water Blue", Charles L. Braun and Sergei N. Smirnov, J. Chem. Educ. 1993, 70 (8), 612.
{ "source": [ "https://chemistry.stackexchange.com/questions/71894", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/5490/" ] }
72,452
I was watching this YouTube video in which the original experiment was to make a small rocket out of liquid propane and Coke. When that failed, the person doing the experiment decided to try pouring liquid propane into the Coke bottle and closing the cap on it to see what would happen. I would have expected (and I believe he did too) that the bottle eventually would build up enough pressure to explode. However, that did not happen and along with that not happening, the liquid appeared to warm up to about air temperature and stopped boiling. When he opened the lid of the bottle, it quickly depressurized, the liquid began boiling and returned to its normal, cold state. My guess is that once the propane vapors filled the bottle and expanded sufficiently, it put enough pressure on the liquid to keep it from boiling. So my question here is, is my guess correct and, along those lines, if it had been warmer outside, would that bottle have built up enough pressure to explode? Note: Even if my guess is correct, I want to see the hows and whys of it in an answer.
Yes, based on what we can see in the video, your guess appears to be correct: as the propane-filled bottle warmed up, just enough propane evaporated to keep the pressure inside the bottle equal to the equilibrium vapor pressure of the liquid propane. According to the video , the ambient temperature outside at the time it was recorded was "about 45 °F", or about 7 °C. Using the formula given here , I calculate the vapor pressure of propane at that temperature to be about 4400 mm Hg , or about 590 kPa or about 5.9 bar. Meanwhile, according to this page , the pressure inside a warm can or bottle of Coke can reach at least 380 kPa, or about two thirds of the vapor pressure of propane on a cold day. As the bottles are certainly designed with a considerable safety margin, to make sure that they won't burst even if handled carelessly or slightly damaged, it's not surprising that they can easily withstand the pressure of the propane in the video. BTW, this is exactly how aerosol spray cans work: they contain a mixture of the liquid being sprayed and a propellant substance (quite often propane) that has a boiling point at 1 atm only slightly below room temperature (or, equivalently, that has an equilibrium vapor pressure only slightly above 1 atm at room temperature). Thus, as the can is drained, the partial boiling of the propellant maintains the pressure inside the can at the propellant's vapor pressure, which is high enough to propel the spray out of the nozzle, but not so high that it would require an excessively sturdy and expensive can to contain it. As for what would happen at higher temperatures, at room temperature (i.e. 25 °C), the vapor pressure of propane would be about 7100 mm Hg or 950 kPa (according to the formula, or about 7600 mm Hg or 1000 kPa according to the table, which seems to be taken from a different source). According to this random forum post , the small ½ liter Coke bottles used in the video can apparently withstand at least 180 psi, or 1250 kPa, so the propane-filled bottle probably wouldn't burst even at room temperature (unless it happened to be damaged or otherwise particularly weak). If the temperature was raised to, say, 45 °C (113 °F, a very hot day), the vapor pressure of the propane would rise further to about 1500 kPa, which just might be enough to make the bottle fail. Also, elevated temperatures will soften and weaken the plastic somewhat, making failure more likely. In any case, in my personal experience, the weakest point of such bottles seems to be the relatively thin and weak cap, which would likely fail at some point before the bottle itself did. I wouldn't be surprised if that was by design, to make the typical failure mode relatively safe and predictable.
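If you want to reproduce those vapor-pressure figures yourself, here is a minimal sketch using the Antoine equation. The coefficient set below is one commonly tabulated set for propane; treat it as an assumption on my part (published sets differ slightly), but it lands within a few percent of the numbers quoted above.

```python
# Antoine equation: log10(P / bar) = A - B / (T/K + C)
# One commonly tabulated coefficient set for propane (assumed here;
# exact values vary a little between sources).
A, B, C = 3.98292, 819.296, -24.417

def propane_vapor_pressure_kpa(t_celsius):
    """Rough equilibrium vapor pressure of propane, in kPa."""
    temperature_k = t_celsius + 273.15
    pressure_bar = 10 ** (A - B / (temperature_k + C))
    return pressure_bar * 100  # 1 bar = 100 kPa

for t in (7, 25, 45):
    print(f"{t:>2} degC: ~{propane_vapor_pressure_kpa(t):.0f} kPa")
# Prints roughly 600, 980 and 1560 kPa, close to the 590, 950-1000
# and 1500 kPa estimates given above.
```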
{ "source": [ "https://chemistry.stackexchange.com/questions/72452", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/5137/" ] }
72,700
If I have a liter of water fully saturated with sucrose, would it be possible to dissolve something like salt or any other substance in the water? Or, once the solution is saturated, is it impossible to dissolve another solute in it?
Saturating a liquid with one solute does not mean that the liquid will no longer dissolve another solute. However, you can expect the solubility of the second solute to be different (generally lower) than in the neat solvent. One relevant concept here (though not specifically applicable to sucrose), in the case of ionic solutes, is the common-ion effect. According to this Wikipedia article: "The common ion effect is responsible for the reduction in the solubility of an ionic precipitate when a soluble compound containing one of the ions of the precipitate is added to the solution in equilibrium with the precipitate. It states that if the concentration of any one of the ions is increased, then, according to Le Chatelier's principle, some of the ions in excess should be removed from solution, by combining with the oppositely charged ions." Regardless of whether the solute is ionic or not, when you are given a solubility value, for example, the solubility of some compound in water as $\pu{g solute/100mL water}$, this value is only relevant to the solubility in pure water. Once you have saturated water with sucrose, you then have a very different solvent system. There are no simple means of determining what the solubility of a particular solute will be in a solution saturated with another solute. However, there will, in general, be some solubility of additional solutes, apart from the reduction described above for the common-ion effect. Please don't hesitate to ask for clarifications in the comments below if I have misunderstood your question or left anything out.
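To put a number on the common-ion effect mentioned above (a textbook-style estimate; the solubility product is the usual literature value for $\ce{AgCl}$ at 25 °C, $K_\mathrm{sp} \approx 1.8\times10^{-10}$): $$s_{\text{pure water}} = \sqrt{K_\mathrm{sp}} \approx 1.3\times10^{-5}~\mathrm{M}, \qquad s_{\text{in } 0.10~\mathrm{M}~\ce{NaCl}} \approx \frac{K_\mathrm{sp}}{[\ce{Cl-}]} = \frac{1.8\times10^{-10}}{0.10} \approx 1.8\times10^{-9}~\mathrm{M}.$$ The shared chloride ion suppresses the solubility of $\ce{AgCl}$ by nearly four orders of magnitude, whereas a second solute with no ion in common typically changes the solubility far more modestly.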
{ "source": [ "https://chemistry.stackexchange.com/questions/72700", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/38365/" ] }
72,968
Today we have huge computational power (which is even significantly larger with supercomputers). I know that computational chemistry is sometimes used to predict molecular properties. As I read on Wikipedia: "Present algorithms in computational chemistry can routinely calculate the properties of molecules that contain up to about 40 electrons with sufficient accuracy." If that's so, why bother to try to find chemical interactions and properties experimentally, at least up to 40 electrons? For example, every year new drugs are being discovered. Wouldn't it be easier at least to find new chemical compounds, if not their properties, simply by computer simulation? What are the constraints and where do they come from? (I know that such constraints exist, but I'd like to know why.)
Forty electrons is tiny. Even if we limit ourselves to just the valence electrons, cyclohexane already has 36 electrons. Anything drug-like has way more electrons than 40. For example, viagra has 178 valence electrons, and that's not necessarily a "large" drug. (Compare with vancomycin, for example.) Even if you're dealing with things like inorganic compounds, where the total number of atoms in the formula unit is small, the properties of the material don't come from a single formula unit, but from the interaction of a large number of atoms. That's an example of a more general principle: the important properties of most materials you use (including drugs) don't come from the molecule in isolation, but from the interactions of the molecule with other molecules, either of the same chemical or of different chemicals. To model all of those interactions accurately, you need a system with much more than 40 electrons. The 40-electron limit comes from the implicit assumption here that you're talking about quantum mechanical calculations. QM calculations are rather computationally expensive, as you have to account for all the interactions of all the electrons with each other at all positions in their delocalized superposition. There are various tricks (like DFT) which make the calculations for large numbers of electrons easier, but note that "easier" doesn't mean "easy". Even with DFT and other approaches, large systems take a lot of computer time to calculate accurately. There are other approaches which don't suffer from the same limit as QM does, but they are able to make their gains in efficiency because they make approximations. For example, molecular mechanics approaches are able to simulate systems in the hundreds of thousands of atoms region. But they're able to do so because they don't actually calculate the positions of electrons. Instead they treat the system "classically", using experimentally fitted interaction potentials which approximate the underlying quantum effects. (For example, they don't exactly calculate the bond stretching potential, but instead approximate it as a harmonic one. That's "close enough" to the true bond stretching potential for the range of bond lengths typically seen in such simulations, but not 100% quantum mechanically accurate.) There are many groups and companies which do use molecular mechanics and other similar approaches to inform their drug and material development process. The issue is that because the energetic potentials being used are only approximate, the results from the simulation are also only approximate. Depending on what you're trying to simulate, the results of the simulation may or may not be accurate. As such, these simulations are treated mostly as a first step, to find potential leads/hypotheses, and then the scientists actually have to go into the lab and test the results to confirm.
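As a quick sanity check on the kind of counting done above, here is a minimal sketch that tallies main-group valence electrons from a molecular formula (the per-element counts are the usual ones; the example is just cyclohexane, so treat the snippet as an illustration rather than anything rigorous):

```python
# Count main-group valence electrons for a molecular formula.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "S": 6, "Cl": 7}

def valence_electrons(formula):
    """formula: dict mapping an element symbol to its atom count."""
    return sum(VALENCE[element] * count for element, count in formula.items())

cyclohexane = {"C": 6, "H": 12}
print(valence_electrons(cyclohexane))  # 36: already close to the ~40-electron limit
```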
{ "source": [ "https://chemistry.stackexchange.com/questions/72968", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/44182/" ] }
73,103
We all know about IUPAC nomenclature. There are rules to name straight-chain compounds, cyclic ones, polycyclic ones, ones with functional groups, and so on. But I have come across several examples of compounds which I simply cannot name, most of them being pharmaceutical drugs. Take the heme group complex in hemoglobin: I'm not sure, but systematic nomenclature seems unsuitable for naming this structure. A popular antacid, Zantac, also known as ranitidine, is another example. Can compounds like these even be named systematically using IUPAC rules?
Definitely not. You got yourself in trouble specifying all organic compounds, because there is a truly immense, mind-boggling number of possible compounds. No one even knows how to accurately determine such a quantity. A very rough estimate , making some incredible simplifications such as the use of only carbon, hydrogen, oxygen, nitrogen and sulfur atoms, is that there are some $10^{63}$ "reasonable" unique structures for compounds with a molecular weight below $\rm{500\ g\ mol^{-1}}$. Thanks to combinatorics, chemical space is enormous . We (or all sapient species in our observable volume, for that matter) will never come close to scratching its surface. Furthermore, IUPAC nomenclature is largely created a posteriori . That is, though there are many rules trying to cover as many bases as possible (in the process becoming quite unwieldy at times), eventually some unexpected compound with unusual connectivity is discovered and becomes of wide interest. Thus standardising its nomenclature and that for closely related structures becomes fundamental to allow communication between scientists. A recent example of this occurred with fullerenes , which quickly jumped to prominence after 1985. IUPAC just had to create an entire new section of nomenclature for this class of compounds, which is not at all uncommon. The closest thing to an absolute method of describing a compound's structure is to have a table of positional data (XYZ coordinates) giving the relative positions between the atoms determined from X-ray/neutron diffraction. Any attempt at simplifying this data will be lossy, whether the structures are drawn (not too lossy) or named (very lossy). The structures you show have comparatively simple IUPAC names, in fact. Heme is a type of porphyrin , which is a widely occurring framework in biomolecules. The central framework can have its positions numbered and the substituent in each one read off separately. Regarding Zantac, the Wikipedia page for the compound states its IUPAC name in the "Identifier" section in the box at the right, namely N-(2-[(5-[(dimethylamino)methyl]furan-2-yl)methylthio]ethyl)-N'-methyl-2-nitroethene-1,1-diamine . As some interesting examples of the relations between available nomenclature and chemical space, consider the following: 1,1,1,2,2,2-Hexaphenylethane : A molecule with a simple structure and a simple IUPAC name which likely cannot exist in reasonable conditions. 
Maitotoxin : an awe-inspiring biomolecule with a rather large structure but containing fairly simple connectivity between atoms, whose IUPAC exists but is quite complex - disodium (2S,3R,4R,4aS,5aR,6aS,7aS,8R,9R,10R,11aR,12R,12aR,13aS,14aR)-10-[(2R,3R,4R,4aS,6S,7R,8aS)-6-[(1R,3R)-4-[(2S,3R,4R,4aS,6R,7R,8aS)-6-[(1R,3S,5R,7S,9R,10R,12R,13S,14S,16R,19S,21R,23S,25S,28R,30S)-25-[(1S,3R,5S,7R,9S,11S,14R,16S,18R,20S,21Z,24R,26S,28R,30S,32R,34R,35R,37S,39R,42S,44R)-11-[(1S,2R,4R,5S)-1,2-dihydroxy-4,5-dimethyloct-7-en-1-yl]-35-hydroxy-14,16,18,32,34,39,42,44-octamethyl-2,6,10,15,19,25,29,33,38,43-decaoxadecacyclo[22.21.0.0³,²⁰.0⁵,¹⁸.0⁷,¹⁶.0⁹,¹⁴.0²⁶,⁴⁴.0²⁸,⁴².0³⁰,³⁹.0³²,³⁷]pentatetracont-21-en-34-yl]-9,13-dihydroxy-3,7,14,19,30-pentamethyl-2,6,11,15,20,24,29-heptaoxaheptacyclo[17.12.0.0³,¹⁶.0⁵,¹⁴.0⁷,¹².0²¹,³⁰.0²³,²⁸]hentriacontan-10-yl]-3,4,7-trihydroxy-octahydropyrano[3,2-b]pyran-2-yl]-1,3-dihydroxybutyl]-3,4,7-trihydroxy-octahydropyrano[3,2-b]pyran-2-yl]-2-[(2S,3R)-2,3-dihydroxy-3-[(1S,3R,5S,6S,7R,8S,10R,11R,13S,15R,17S,19R,21R,22S,24S,25S,26R)-6,7,11,21,25-pentahydroxy-13,17-dimethyl-8-[(2R,3R,4R,7S,8R,9R,11R,13E)-3,8,11,15-tetrahydroxy-4,9,13-trimethyl-12-methylidene-7-(sulfonatooxy)pentadec-13-en-2-yl]-4,9,14,18,23,27-hexaoxahexacyclo[13.12.0.0³,¹³.0⁵,¹⁰.0¹⁷,²⁶.0¹⁹,²⁴]heptacosan-22-yl]propyl]-4,8,9,12-tetrahydroxy-hexadecahydro-2H-1,5,7,11,13-pentaoxapentacen-3-yl sulfate This hydrocarbon : A molecule which likely exists, with a seemingly very simple structure, but with slightly quirky connectivity which makes naming it a challenge. Add a few more bridges and I'm sure you can break any existent nomenclature rules.
{ "source": [ "https://chemistry.stackexchange.com/questions/73103", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/42741/" ] }
73,876
Can isotopes of any given element be represented using a completely different chemical symbol? What's the IUPAC's take on this? Sure, ordinarily you would add the isotope's mass as a superscript to the element's symbol to differentiate it from other isotopes: For example, carbon-12 ($\ce{^{12}C}$) and carbon-14 ($\ce{^{14}C}$); however the base-symbol $\ce{C}$, for carbon, doesn't change. But the isotopes of hydrogen don't seem to follow this strictly. Often, I see deuterium ($\ce{^{2}H}$) and tritium ($\ce{^{3}H}$) represented by $\ce{D}$ (I see this one in organic chem textbooks a lot) and $\ce{T}$ respectively. Does this "convention" fit in with IUPAC norms? If so, can isotopes of other elements be represented differently as well?
IR-3.3.1 Isotopes of an element The isotopes of an element all bear the same name (but see Section IR-3.3.2) and are designated by mass numbers (see Section IR-3.2). For example, the atom of atomic number 8 and mass number 18 is named oxygen-18 and has the symbol $\ce{^{18}_{}O}$ . IR-3.3.2 Isotopes of hydrogen Hydrogen is an exception to the rule in Section IR-3.3.1 in that the three isotopes $\ce{^{1}_{}H}$ , $\ce{^{2}_{}H}$ and $\ce{^{3}_{}H}$ can have the alternative names protium, deuterium and tritium, respectively. The symbols D and T may be used for deuterium and tritium but $\ce{^{2}_{}H}$ and $\ce{^{3}_{}H}$ are preferred because D and T can disturb the alphabetical ordering in formulae (see Section IR-4.5). The combination of a muon and an electron behaves like a light isotope of hydrogen and is named muonium, symbol $\ce{Mu}$ .⁵ These names give rise to the names proton, deuteron, triton and muon for the cations $\ce{^{1}_{}H+}$ , $\ce{^{2}_{}H+}$ , $\ce{^{3}_{}H+}$ and $\ce{Mu+}$ , respectively. Because the name proton is often used in contradictory senses, i.e. for isotopically pure $\ce{^{1}_{}H+}$ ions on the one hand, and for the naturally occurring undifferentiated isotope mixture on the other, it is recommended that the undifferentiated mixture be designated generally by the name hydron, derived from hydrogen. Source : N.G. Connelly, T. Damhus, R.M. Hartshorn, A.T. Hutton (eds) (2005). Nomenclature of Inorganic Chemistry (PDF). RSC–IUPAC. ISBN 0-85404-438-8. Addendum: The small subscript ⁵ present in the source is a reference to Names for Muonium and Hydrogen Atoms and Their Ions, W.H. Koppenol, Pure Appl. Chem., 73, 377–379 (2001) which can be viewed over at: https://www.iupac.org/publications/pac/pdf/2001/pdf/7302x0377.pdf $\cdots$ A particle consisting of a positive muon and an electron ( $\pu{\mu^+ e^–}$ ) is named “muonium” and has the symbol $\ce{Mu}$ . Examples: “muonium chloride,” $\ce{MuCl}$ , is the equivalent of deuterium chloride $\cdots$
{ "source": [ "https://chemistry.stackexchange.com/questions/73876", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
73,888
The IUPAC name for this is ethylcyclobutane . I know that the formula is $\ce{C6H12}$ . I have no problems drawing the skeletal structure, but I have a little difficulty drawing the condensed structure. I know this is basic, but I just started learning how to name them. I tried it below. For $\ce{CHCH2CH3}$ , am I right to write it in a straight chain?
{ "source": [ "https://chemistry.stackexchange.com/questions/73888", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/44806/" ] }
75,597
( Yes, I know the question sounds super-trivial... but bear with me here ) Most chocolates (especially milk-chocolate) tend to melt into this sticky (albeit delicious) mess at temperatures slightly above room temperature. Why does this happen? From my experience (as a high-school student), most organic solids (at room temperature) tend to either sublime or decompose at moderate temperatures; by "moderate", I mean a temperature range between 25 degrees Celsius (typical room temp) and 50 degrees Celsius (a very hot day in Arizona). Now why do I find this particular property (ease of melting) of chocolate intriguing all of a sudden? Because chocolate is an organic cocktail of all sorts of compounds such as sugars, lipids, proteins, alkaloids, blah blah blah; yet in the moderate temperature range, it doesn't sublime nor does it decompose (chill the molten chocolate, and you'll still get chocolate)... it first softens and then melts, unlike most other organic substances in that temperature range. So, what makes chocolate so special? And why does milk chocolate (tend to) melt faster than dark chocolate (higher-cocoa content).
tl;dr The main structural component of chocolate is cocoa butter, a fat whose triglycerides are built mainly from oleic, palmitic, and stearic acids. Cocoa butter has multiple crystal structures, and manufacturers target a specific form which melts at around 33 °C. The fact that chocolate is about 90% sugar and cocoa butter is why chocolate is (chemically) insensitive to temperature changes. While chocolate itself is a giant mess of hundreds of different compounds, the primary structural material of chocolate is fat. Wikipedia lists a typical composition (by mass) of about 60% sugar, 30% fats, and 10% proteins/other. So it's not terribly surprising that chocolate doesn't sublime or decompose. You probably wouldn't expect a ball of butter + sugar to suddenly decompose at 50 °C, and for the most part, chocolate is chemically pretty similar. Now, what's interesting about cocoa butter is that it can form multiple crystal structures, and it's actually a bit tricky to get the right crystal structure out. You can read about the process in this RSC article if you'd like. The one that most chocolatiers are after is polymorph V, which is glossy and melts just below body temperature. However, a different polymorph, polymorph VI, is the most stable form of cocoa butter, and it has a melting point that's a bit higher. It also tends to have a faint white crust on it (known as "fat bloom"). As a result, it's not as pleasant to look at, not as pleasant to eat (since it takes some chewing on to get it to melt), and generally undesirable. Since form VI is the most stable, all chocolate eventually goes "bad" and becomes not as delicious as the maker originally intended. If you've ever bitten into an old chocolate bar and thought it was a bit waxy and unmelty, you've experienced this interconversion. I've been told by someone who did research on crystal structure kinetics that chocolate manufacturers spend lots and lots of money on research in hopes of being able to slow down or stop this process. Now, I'd like to take a stab at addressing two points in the question. "Chill the molten chocolate, and you'll still get chocolate." Yes, but is it the exact same chocolate as you got before? What if you take the molten chocolate and put it in the freezer, or, heaven forbid, drop it in liquid nitrogen? Does the resulting chocolate still melt at the same temperature? My suspicion (based on this chart) is that you'll have generated some of the lower-melting cocoa butter forms, and thus the chocolate will get soft and melt at a much lower temperature than previously. "And why does milk chocolate (tend to) melt faster than dark chocolate (higher-cocoa content)?" I don't know. If I were to hazard a guess, I would say it's because milk chocolate tends to contain more sugar than dark chocolate, and so you're seeing some form of freezing-point depression. However, if it turns out that, at similar concentrations of sugar, milk chocolate still melts more quickly, I'd have no idea.
{ "source": [ "https://chemistry.stackexchange.com/questions/75597", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
76,505
It seems that nitrous oxide $(\ce{N2O})$ is frequently used to create whipped cream. But why can't just regular nitrogen gas $(\ce{N2})$ be used instead?
There are two ways to efficiently make an aerosol product: (1) use a gas that liquefies under the pressure inside the can (for example, butane lighters). Nitrogen is one of the "fixed gases", meaning it's a gas under most conditions (but take a look at the temperatures and pressures needed for liquid nitrogen—it's not going to ever be found in consumer products). Or (2) use a gas that is highly soluble in the liquid (carrier) and that will "substantially" vaporize when the higher pressure inside the can is reduced to atmospheric. The US government restricts the pressures that can be used in aerosol cans (and requires 100% quality control testing—when's the last time you heard of an aerosol can exploding? [although it does happen]). If you cut up a can, you'll notice that it's pretty flimsy. The higher the pressure (and a gas that has been dissolved in a liquid doesn't exert much pressure), the more expensive it will be to build the container (aerosol can). Thus, the fixed gases are almost never used, except in some medical products. Why? Because they just aren't soluble enough to help move the liquid (or, in other cases, solid) out of the can and also to disperse it into a very fine mist. The customer wants basically one thing when using an aerosol: uniform, consistent spray from first to last drop. Using a very soluble gas helps, and using one with a boiling point near room temperature also helps. But the laws of thermodynamics say that the temperature of the can will drop as you spray out its contents. This may dramatically interfere with the liquid-to-gas phase change, while solubility is less sensitive to temperature. The trick is to make a product (and I've made some) that sprays out consistently and also doesn't leave so much left behind in the can that the customer feels ripped off.
{ "source": [ "https://chemistry.stackexchange.com/questions/76505", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/43057/" ] }
76,726
I have asked a lot of questions on coordination chemistry here before and I have gone through a lot of others here as well. Students, including me, attempt to answer those questions using the concept of hybridization because that's what we are taught in class and, of course, it's easier and more intuitive than crystal field theory/molecular orbital theory. But almost every time I attempted to use the concept of hybridization to explain bonding, somebody came along and told me that it's wrong. How do you determine the hybridisation state of a coordinate complex? This is a link to one such question, and the first thing that the person who answered it says is: "Again, I feel a bit like a broken record. You should not use hybridization to describe transition metal complexes." I need to know: Why is it wrong? Is it wrong because it's oversimplified? Why does it work well when explaining bonding in other compounds? What goes wrong in the case of transition metals?
Tetrahedral complexes: Let's consider, for example, a tetrahedral $\ce{Ni(II)}$ complex ( $\mathrm{d^8}$ ), like $\ce{[NiCl4]^2-}$ . According to hybridisation theory, the central nickel ion has $\mathrm{sp^3}$ hybridisation, the four $\mathrm{sp^3}$ -type orbitals are filled by electrons from the chloride ligands, and the $\mathrm{3d}$ orbitals are not involved in bonding. Already there are several problems with this interpretation. The most obvious is that the $\mathrm{3d}$ orbitals are very much involved in (covalent) bonding: a cursory glance at an MO diagram will show that this is the case. If they were not involved in bonding at all, they should remain degenerate, which is obviously untrue; and even if you bring in crystal field theory (CFT) to say that there is an ionic interaction, it is still not sufficient. If accuracy is desired, the complex can only really be described by a full MO diagram. One might ask why we should believe the MO diagram over the hybridisation picture. The answer is that there is a wealth of experimental evidence, especially electronic spectroscopy ( $\mathrm{d-d^*}$ transitions being the most obvious example), and magnetic properties, that is in accordance with the MO picture and not the hybridisation one. It is simply impossible to explain many of these phenomena using this $\mathrm{sp^3}$ model. Lastly, hybridisation alone cannot explain whether a complex should be tetrahedral ( $\ce{[NiCl4]^2-}$ ) or square planar ( $\ce{[Ni(CN)4]^2-}$ , or $\ce{[PtCl4]^2-}$ ). Generally the effect of the ligand, for example, is explained using the spectrochemical series. However, hybridisation cannot account for the position of ligands in the spectrochemical series! To do so you would need to bring in MO theory. Octahedral complexes: Moving on to $\ce{Ni(II)}$ octahedral complexes, like $\ce{[Ni(H2O)6]^2+}$ , the typical explanation is that there is $\mathrm{sp^3d^2}$ hybridisation. But all the $\mathrm{3d}$ orbitals are already populated, so where do the two $\mathrm{d}$ orbitals come from? The $\mathrm{4d}$ set, I suppose. The points raised above for the tetrahedral case still apply here. However, here we have something even more criminal: the involvement of $\mathrm{4d}$ orbitals in bonding. This is simply not plausible, as these orbitals are energetically inaccessible. On top of that, it is unrealistic to expect that electrons will be donated into the $\mathrm{4d}$ orbitals when there are vacant holes in the $\mathrm{3d}$ orbitals. For octahedral complexes where there is the possibility for high- and low-spin forms (e.g., $\mathrm{d^5}$ $\ce{Fe^3+}$ complexes), hybridisation theory becomes even more misleading: Hybridisation theory implies that there is a fundamental difference in the orbitals involved in metal-ligand bonding for the high- and low-spin complexes. However, this is simply not true (again, an MO diagram will illustrate this point). And the notion of $\mathrm{4d}$ orbitals being involved in bonding is no more realistic than it was in the last case, which is to say, utterly unrealistic. In this situation, one also has the added issue that hybridisation theory provides no way of predicting whether a complex is high- or low-spin, as this again depends on the spectrochemical series. Summary: Hybridisation theory, when applied to transition metals, is both incorrect and inadequate.
It is incorrect in the sense that it uses completely implausible ideas ( $\mathrm{3d}$ metals using $\mathrm{4d}$ orbitals in bonding) as a basis for describing the metal complexes. That alone should cast doubt on the entire idea of using hybridisation for the $\mathrm{3d}$ transition metals. However, it is also inadequate in that it does not explain the rich chemistry of the transition metals and their complexes, be it their geometries, spectra, reactivities, or magnetic properties. This prevents it from being useful even as a predictive model. What about other chemical species? You mentioned that hybridisation works well for "other compounds." That is really not always the case, though. For simple compounds like water, etc. there are already issues associated with the standard VSEPR/hybridisation theory. Superficially, the $\mathrm{sp^3}$ hybridisation of oxygen is consistent with the observed bent structure, but that's just about all that can be explained. The photoelectron spectrum of water shows very clearly that the two lone pairs on oxygen are inequivalent, and the MO diagram of water backs this up. Apart from that, hybridisation has absolutely no way of explaining the structures of boranes; Wade's rules do a much better job with the delocalised bonding. And these are just Period 2 elements - when you go into the chemistry of the heavier elements, hybridisation generally becomes less and less useful a concept. For example, hypervalency is a huge problem: $\ce{SF6}$ is claimed to be $\mathrm{sp^3d^2}$ hybridised, but in fact $\mathrm{d}$ -orbital involvement in bonding is negligible . On the other hand, non-hypervalent compounds, such as $\ce{H2S}$ , are probably best described as unhybridised - what happened to the theory that worked so well for $\ce{H2O}$ ? It just isn't applicable here, for reasons beyond the scope of this post. There is probably one scenario in which it is really useful, and that is when describing organic compounds. The reason for this is because tetravalent carbon tends to conform to the simple categories of $\mathrm{sp}^n$ $(n \in \{1, 2, 3\})$ ; we don't have the same teething issues with $\mathrm{d}$ -orbitals that have been discussed above. But there are caveats. For example, it is important to recognise that it is not atoms that are hybridised, but rather orbitals : for example, each carbon in cyclopropane uses $\mathrm{sp^5}$ orbitals for the $\ce{C-C}$ bonds and $\mathrm{sp^2}$ orbitals for the $\ce{C-H}$ bonds. The bottom line is that every model that we use in chemistry has a range of validity, and we should be careful not to use a model in a context where it is not valid. Hybridisation theory is not valid in the context of transition metal complexes, and should not be used as a means of explaining their structure, bonding, and properties.
{ "source": [ "https://chemistry.stackexchange.com/questions/76726", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/35341/" ] }
76,736
Suppose that I fill a glass with ice water. As the ice melts, it cools the water around it. Given that cold water is denser than hot water, I would presume that the cold water would sink to the bottom … but it would warm as it sinks, reducing the density. Meanwhile, the ice is still melting and giving off its cold to the surrounding water. So is drinking ice water with a straw going to get you cooler or warmer water than drinking from the lip of the glass?
Interesting question! A few things first: "As the ice melts, it cools the water around it." Technically, the ice cube melts because the water cools down. This may sound ridiculous at first, but you must consider the fact that the ice melts because it has drawn "heat" (energy) from its surroundings. The "surroundings" here are the air and water that surround it (but the water's more important since it's a better conductor of thermal energy). "Given that cold water is denser than hot water, I would presume that the cold water would sink to the bottom...but it would warm as it sinks, reducing the density." You're right, cold water is denser than hot water. It is helpful to note that it shouldn't be too cold though. As the temperature of water drops to 4 °C, the density of water gradually increases. However, as the temperature drops below 4 °C, the density actually begins to decrease again, so water in this range "floats" over slightly warmer water at around 4 °C. "Meanwhile, the ice is still melting and giving off its cold to the surrounding water." Ice isn't giving off its "cold"; rather, it takes in the water's "heat" (thermal energy). Back to your question. As Max mentions in his answer, you have done a particularly good job of indicating what physical parameters we're dealing with; the really important ones being the temperature of ice, temperature of water (at the time you put the ice in) and the quantity of ice used (at least with respect to the water). But assuming you're drinking water (originally at room temperature) out of a 250 ml styrofoam or plastic cup, and you used two (normal-sized) ice cubes and that you began drinking the water a minute after you plonk in the ice cubes, the water should be colder at the top than at the bottom. Consider minute, imaginary layers/regions/packets of water in the cup (thinking about this in terms of water "packets" rather than water molecules is easier to comprehend). Also, think of the cup as having three (crudely demarcated) regions: top, middle, and bottom. Packets of water immediately adjacent to the ice cubes are in thermal equilibrium with the outermost regions of the ice. However, these packets soon gain some thermal energy from other water packets that are adjacent to them. So as these packets slowly rise in temperature, from zero degrees to past 4 °C, they sink and new packets occupy locations adjacent to the ice. The cycle repeats for as long as the ice is there. Now, as those packets of water sink, they gain more thermal energy from the packets of water they come in contact with on their way down. This, coupled with the viscous effects of water, results in the mild "warming up" of the sinking packets. Now since they warm up a bit, they tend to rise back up. Back at the top, they get cooled and sink again. This process repeats for as long as the ice remains in the water. Take a step back, and you'll see that the middle of the cup ought to be cold, the bottom of the cup ought to be colder, and the top of the cup is the coldest.
{ "source": [ "https://chemistry.stackexchange.com/questions/76736", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/46867/" ] }
77,171
Having looked at the various definitions of acids and bases and having refined my understanding of it after learning about the inadequacies of pKa and the novel use of the Hammett acidity function, I would like to ask if the bare proton is the strongest acid? I would like to define "acid strength" not as the extent of dissociation but more simply as the ability to protonate other chemical species. I have come to this conclusion after reading through a post on the explanation behind the strength of fluorantimonic acid, being that the bare proton is liberated and that the conjugate base is so well coordinated, allowing the charge to be spread out over a large structure, stabilising it to a great extent. There is no doubt about the proton being the strongest acid in the Brønsted-Lowry sense. Similarly, in the Lewis sense, this should also be logical as what could possibly more electrophilic than a bare proton?
No. Brønsted theory: In Brønsted theory, $\ce{H+}$ isn't an acid at all. Acids lose protons, becoming conjugate bases, and $\ce{H+}$ is the proton itself. Arrhenius theory: $\ce{H+}$ isn't an acid, because in this theory acids dissociate in water to form hydrogen ions. Lewis theory: $\ce{H+}$ is an incredibly strong acid, but nuclei of other, heavier elements, for example alpha particles, are arguably stronger. I haven't found hard data for this and it may be rather difficult to get; these aren't your friendly neighbourhood Lewis acids ;) Protonating agent: Bare $\ce{H+}$ might be the ultimate protonating agent. In proton transfer, with any Brønsted acid you could always try to find an acceptor weak enough that the reaction constant would be lower than 1, i.e. the proton would "prefer" to stay with the acid rather than protonate the base. That's not the case with a bare proton, which is unbound. Therefore it may beat any Brønsted acid. Why only "might"? Because whether a bare proton can bind to a species depends on its energy, which needs to be lower than the proton affinity of the molecule to which it's supposed to bind. Otherwise the proton may ionise the molecule instead, and even fuse with one of its nuclei, if the energy is high enough. Another thing is that acids like $\ce{H4O^2+}$, which are endothermic molecules, could beat a bare proton because of their repulsive nature: they spontaneously lose protons and "throw them away" with the positive charge of their conjugate base!
{ "source": [ "https://chemistry.stackexchange.com/questions/77171", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/44877/" ] }
77,174
The classification of organic compounds includes open-chain and closed-chain compounds. Closed-chain compounds are further divided into homocyclic and heterocyclic compounds. Among homocyclic compounds there are two types: alicyclic and aromatic compounds. In alicyclic compounds, only single bonds are allowed, whereas in the aromatic category the ring of carbon atoms has alternating single and double bonds (i.e. conjugated double bonds). So what happens to the compounds which have a lone double or triple bond? For example, cyclohexane is classified under alicyclic, but where would cyclohexene or cyclohexyne be classified?
{ "source": [ "https://chemistry.stackexchange.com/questions/77174", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/47160/" ] }
77,475
I always prefer to drink water without sipping, by pouring it into my mouth from a short distance. I do this so that my saliva doesn't contaminate the water contents, which would make the water impossible to share with my friends. But lately, I have been thinking: can the saliva diffuse through the water stream that pours into my mouth? Saliva is mainly composed of water, some salts, and amylase. Amylase is a rather bulky protein, so I don't expect it to diffuse so quickly, but I'm a little concerned about the electrolytes. Ions like $\ce{Na+, K+, Ca^2+, Mg^2+, Cl-, HCO3-, PO4^3-}$ are quite small and might diffuse into the bottle. I don't have access to the values of the ionic speeds, so I'd like to ask a chemist for confirmation whether these ions can diffuse into the bottle. To make the question a bit more informative, let's assume I keep the bottle about $\pu{3cm}$ away from my mouth, so the water would be flowing at a speed of approximately $\pu{77cm s^-1}$.
This question reminds me of a germaphobic classmate I had back in middle school. He would always stay clear of public lavatories/urinals because he was under the impression that urinating at one would result in bacteria from the urinal climbing up the stream of urine and into his urethra... needless to say, he was a very lonely boy. The situation you described ("drinking without sipping") is a modification (albeit a more reasonable one) of the "bacteria-climbing-up-a-stream-of-urine" theory. I'm not a chemist myself, and like you, I am faced with... ahem, "issues", while searching for speeds of common ions in water (at a given temperature, and in the absence of an external electric field). But I feel this question can still be satisfactorily approached even without these values. From the wording of your question, I suppose you assumed that potassium, sodium, magnesium and other ions would diffuse from a region of higher concentration (the saliva in your mouth) to a region of lower concentration of those ions (the stream of water entering your mouth). Now this would seem perfectly plausible if it weren't for a few things: 1) You're dealing with flowing water. This isn't a simple case of spitting into a glass of water and waiting for the contents of your saliva to (passively) diffuse into the water (and eventually form a dilute saliva solution). By drinking it the way you do, the passive diffusion of electrolytes in your saliva won't be able to match the speed of the water flowing into your mouth, much less exceed it in order to enter the bottle. 2) All the electrolytes in your mouth are trapped in a viscous protein soup. Human saliva contains a variety of proteins (among other things). Common experience tells us that saliva is (a little more) viscous than water, and doesn't "instantly" dissolve in it (<----- coming from a guy who had to spit in test tubes for his Biology lab assignments). Now not only do the electrolytes have to diffuse upstream to get into the bottle, but they also have to "free" themselves from the saliva first. 3) What do you mean by "saliva"? This is the philosophically inclined conclusion to my answer. You're worried that the entry of electrolytes from your mouth into the bottle (won't happen) counts as contaminating it with your saliva. At first glance, even if the electrolytes seem to be able to make it to the bottle, you're pretty sure that larger molecules (such as proteins) won't be able to do the same. If you don't count the proteins, epithelial cells and bacteria that normally constitute human saliva... how would this make saliva any different from a simple laboratory-prepared cocktail of sodium, potassium, magnesium, calcium, etc.? Besides, ordinary (un-branded) bottled water would already contain these electrolytes; so by that logic, the water's already contaminated with your "saliva" even before you opened it!
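To attach numbers to point 1 (an order-of-magnitude estimate; the diffusion coefficient is a typical textbook value for small ions in water, $D \approx \pu{2E-9 m^2 s^-1}$): the characteristic time for an ion to diffuse the $\pu{3 cm}$ back up into the bottle is roughly $$t \approx \frac{x^2}{2D} = \frac{(\pu{0.03 m})^2}{2 \times \pu{2E-9 m^2 s^-1}} \approx \pu{2E5 s} \approx 2.5~\text{days},$$ while the stream carries water across the same $\pu{3 cm}$ in about $0.03/0.77 \approx \pu{0.04 s}$. Diffusion loses by six to seven orders of magnitude, even before worrying about the viscosity of the saliva.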
{ "source": [ "https://chemistry.stackexchange.com/questions/77475", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/42741/" ] }
78,662
The heating and AC system in the building where I occasionally work, works like this (basically a thermostat): The administrators set some temperature that is maintained automatically. Say, in the winter, the outside temperature is 0 °C. The admins will set the desired inside temperature to 20 °C. There is some thermometer behind a little steel panel. It measures it to be cold in the room; therefore the heat comes on. I can trick the heat into coming on even more, by rubbing some alcohol or acetone on the steel panel. The acetone evaporates endothermically, making the steel panel REALLY cold; then the heater thinks that it's REALLY cold in the room and pumps the heat in. The same system is in place in the summer, but in reverse. However, I can't trick the panel in the same way. If I put acetone on the panel, it'll cool off, and then the A/C unit thinks the room is just fine, because the panel is cool. I'm wondering if there is some liquid that will evaporate exothermically, and heat up the panel, in order to make the system think I need more A/C? I don't believe that exothermic evaporate exists; here is a list of heat of vaporization for various substances; all positive (endothermic). http://www.engineeringtoolbox.com/fluids-evaporation-latent-heat-d_147.html But maybe I'll be surprised and it does exist! I have been holding my laptop's hot backside up to the panel instead, but that's a bit too manual for my taste. Edit: To be a bit clearer, let me specify that this is just a thought experiment. I'm not actually expecting to find something which doesn't make sense like an exothermically evaporating liquid.
No such liquid, safe or otherwise, can exist. Evaporation is a strictly endothermic process in all cases. The change in state from liquid to gas is marked by the individual particles gaining enough translational kinetic energy to overcome the mutual attractions present in the liquid phase to "fly free" in the gas phase. It is logically inconsistent for a substance to increase its internal energy and release energy to the surroundings as heat in the same process. In order to achieve both evaporation and a release of energy, one would have to find a liquid that reacts to (a) release heat and (b) form gaseous products. The energy required to move from the liquid to the gas phase is substantial ; more than likely the only reactions exothermic enough to provide a net release of heat are combustion reactions. I somehow doubt dumping a flammable liquid into your thermostat, inserting a wick, and lighting it on fire is a satisfactory solution for you.
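For a sense of scale (approximate literature values, quoted from memory rather than from any source in the question): $$\Delta H_\mathrm{vap}(\text{acetone}) \approx \pu{+31 kJ mol^-1} \approx \pu{+0.5 kJ g^-1}, \qquad \Delta H_\mathrm{vap}(\text{water}) \approx \pu{+41 kJ mol^-1}.$$ The sign is positive for every known liquid, which is exactly why the acetone trick cools the panel, and why no liquid can do the opposite.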
{ "source": [ "https://chemistry.stackexchange.com/questions/78662", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/41935/" ] }
80,780
This has been bugging me for a while now... Obviously, to calculate the volume/space occupied by a mole of (an ideal) gas, you'll have to specify temperature ($T$) and pressure ($P$), find the gas constant ($R$) value with the right units and plug them all in the ideal gas equation $$PV = nRT.$$ The problem? It seems to be some sort of common "wisdom" all over the Internet, that one mole of gas occupies $22.4$ liters of space. But the standard conditions (STP, NTP, or SATP) mentioned lack consistency over multiple sites/books. Common claims: A mole of gas occupies, $\pu{22.4 L}$ at STP $\pu{22.4 L}$ at NTP $\pu{22.4 L}$ at SATP $\pu{22.4 L}$ at both STP and NTP Even Chem.SE is rife with the "fact" that a mole of ideal gas occupies $\pu{22.4 L}$, or some extension thereof. Being so utterly frustrated with this situation, I decided to calculate the volumes occupied by a mole of ideal gas (based on the ideal gas equation) for each of the three standard conditions; namely: Standard Temperature and Pressure (STP), Normal Temperature and Pressure (NTP) and Standard Ambient Temperature and Pressure (SATP). Knowing that, STP: $\pu{0 ^\circ C}$ and $\pu{1 bar}$ NTP: $\pu{20 ^\circ C}$ and $\pu{1 atm}$ SATP: $\pu{25 ^\circ C}$ and $\pu{1 bar}$ And using the equation, $$V = \frac {nRT}{P},$$ where $n = \pu{1 mol}$, by default (since we're talking about one mole of gas). I'll draw appropriate values of the gas constant $R$ from this Wikipedia table : The volume occupied by a mole of gas should be: At STP \begin{align} T &= \pu{273.0 K},& P &= \pu{1 bar},& R &= \pu{8.3144598 \times 10^-2 L bar K^-1 mol^-1}. \end{align} Plugging in all the values, I got $$V = \pu{22.698475 L},$$ which to a reasonable approximation, gives $$V = \pu{22.7 L}.$$ At NTP \begin{align} T &= \pu{293.0 K},& P &= \pu{1 atm},& R &= \pu{8.2057338 \times 10^-2 L atm K^-1 mol^-1}. \end{align} Plugging in all the values, I got $$V = \pu{24.04280003 L},$$ which to a reasonable approximation, gives $$V = \pu{24 L}.$$ At SATP \begin{align} T &= \pu{298.0 K},& P &= \pu{1 bar},& R &= \pu{8.3144598 \times 10^-2 L bar K^-1 mol^-1}. \end{align} Plugging in all the values, I got $$V = \pu{24.7770902 L},$$ which to a reasonable approximation, gives $$V = \pu{24.8 L}.$$ Nowhere does the magical "$\pu{22.4 L}$" figure in the three cases I've analyzed appear. Since I've seen the "one mole occupies $\pu{22.4 L}$ at STP/NTP" dictum so many times, I'm wondering if I've missed something. My question(s): Did I screw up with my calculations? (If I didn't screw up) Why is it that the "one mole occupies $\pu{22.4 L}$" idea is so widespread, in spite of not being close (enough) to the values that I obtained?
The common saying is a hold over from when STP was defined to be $\pu{273.15 K}$ and $\pu{1 atm}$. However, IUPAC changed the definition in 1982 so that $\pu{1 atm}$ became $\pu{1 bar}$. I think the main issue is a lot of educators didn't get the memo and went right along either teaching STP as $\pu{1 atm}$ or continuing with the line they were taught ("$\pu{1 mol}$ of any gas under STP occupies $\pu{22.4 L}$") without realizing it didn't hold under the new conditions. Just as a "proof" of this working for the old definition. \begin{align} V &=\frac{nRT}{P}\\ &=\frac{\pu{1 mol} \times \pu{8.2057338 \times 10^-2 L * atm//K * mol} \times \pu{273.15 K}}{\pu{1 atm}}\\ &=\pu{22.41396 L}\\ &\approx \pu{22.4 L} \end{align}
{ "source": [ "https://chemistry.stackexchange.com/questions/80780", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/33991/" ] }
81,242
Why is it that heat from the Earth reflects back off carbon dioxide and other greenhouse gases but not gases like nitrogen or oxygen?
According to the Intergovernmental Panel on Climate Change (IPCC): "Greenhouse gases are those that absorb and emit infrared radiation in the wavelength range emitted by Earth." In order for a molecule to absorb and emit in the infrared (IR) region, it must rotate and vibrate in a manner that changes something called the molecule's dipole moment. It turns out that due to the symmetry of diatomic molecules like $\ce{O2}$ and $\ce{N2}$, this process cannot happen, and thus these types of molecules cannot absorb in the infrared, which is the wavelength range in which heat is radiated. Because some of the vibrations of $\ce{CO2}$ and other greenhouse gases momentarily break this kind of symmetry, these molecules can vibrate at specific frequencies within the IR in a manner that changes the dipole moment, and thus absorb this radiation, resulting in the transfer of heat. Much of the heat radiated from the Earth's surface is of the proper wavelength (energy) to be absorbed by these gases, so the criteria given by the IPCC for a greenhouse gas are thus met. As I'm not certain of your level of understanding of chemistry, I tried to direct my answer somewhere between purely technical and purely non-technical. If I've left anything unclear to you, please don't hesitate to leave a question in the comments.
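To make that concrete (standard spectroscopic values, added here only as an illustration): for a homonuclear diatomic the dipole moment is zero at every bond length, so $\mathrm{d}\mu/\mathrm{d}q = 0$ and no vibration can couple to IR light. For $\ce{CO2}$, by contrast, the bending mode near $\pu{667 cm^-1}$ and the asymmetric stretch near $\pu{2349 cm^-1}$ both produce an oscillating dipole and are strongly IR-active (the symmetric stretch, which preserves the symmetry, is not), and the $\pu{667 cm^-1}$ band in particular sits close to the peak of Earth's thermal emission.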
{ "source": [ "https://chemistry.stackexchange.com/questions/81242", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/48875/" ] }
81,751
Oxford dictionary online gives etymology of alanine as: Coined in German as Alanin, from al dehyde + - an (for ease of pronunciation) + - ine . But I see no resemblance to the aldehyde structure in alanine. Is this etymology incorrect, or am I mistaken about the resemblance?
In the original German paper [1] Adolf Strecker used Aldehyd-Ammoniak, or aldehyde-ammonia, as a precursor, which is where the name derives from: Vor einigen Jahren habe ich gezeigt, daſs Aldehyd-Ammoniak und Blausäure beim Erwärmen mit verdünnter Chlorwasserstoffsäure sich zu einer schwachen Basis, Alanin genannt, vereinigen [...] (roughly: "Some years ago I showed that aldehyde-ammonia and hydrocyanic acid, when warmed with dilute hydrochloric acid, combine to form a weak base called alanine [...]"): $$\ce{\underset{\text{Aldehyd-Ammoniak}}{C4H4O2 * NH3} + HCl + \underset{Blausäure}{C2NH} +2 HO = \underset{Alanin}{C6H7NO4} + NH4Cl}$$ As David Richerby mentioned in the comments, Strecker's molecular formula ($\ce{C6H7NO4}$) deviates from the modern one ($\ce{C3H7NO2}$), and the reaction scheme is written a bit differently. [1] Strecker, A. Annalen der Chemie und Pharmacie 1854, 91 (3), 349–351. DOI: 10.1002/jlac.18540910309.
{ "source": [ "https://chemistry.stackexchange.com/questions/81751", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/-1/" ] }
81,770
I am using a (1,1,1) gold electrode for electron transport experiments using Transiesta. I have never used it before. I am wondering if when I do the electrode run, do I build a system where it is the left and right electrodes connected to each other? or do I use the entire device (molecule AND electrodes) and just set a flag in the file? (In response to andselisk's request for more information about Transiesta: it is a computational chemistry program that is a part of the Siesta Suite. It is used mainly for electron transport calculations.)
{ "source": [ "https://chemistry.stackexchange.com/questions/81770", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/51206/" ] }
82,935
Jet engines can run on almost any fuel, and the operating temperatures of modern jet engines' hottest sections are anywhere between 3000 and 3150 degrees F (1648 and 1732 degrees Celsius). Does that mean that a hydrogen-on-demand system could work on modern jets? Water is pumped and heated first by the exhaust section, then directed towards the hotter sections of the engine (once hot enough not to cause cooling and lower engine efficiency), where it is broken down into hydrogen and oxygen at temperatures above 1472 degrees F (800 degrees Celsius); those gases are then pumped into the engine for combustion. The advantages are that, firstly, water is abundant and therefore cheap. Even sea water could be used, because at those temperatures it is easy to design a system that would get rid of the impurities that would otherwise corrode critical engine parts. Secondly, it would save on manufacturing costs, given that non-heat-critical parts in the exhaust section would not need to be made of sophisticated and expensive materials and alloys, given the cooling effect of the water. Thirdly, the cost of the fuel weight would be reduced, given that the energy density of hydrogen is twice that of fossil fuels, so less would need to be carried. And most importantly, the environmental problem in aviation would be solved, given that there would be little or no carbon dioxide emissions.
It can't work because of fundamental thermodynamics. What you are proposing is, basically: the plane carries water; the water is broken down into its components, hydrogen and oxygen; and the components are recombined by burning them as fuel. Burning hydrogen and oxygen is a perfectly good way to create a lot of heat. But it doesn't much matter how you break the water apart into hydrogen and oxygen, the thermodynamics of the reaction won't work. The problem is simple: you need a source of energy to split the water apart. In chemistry we know the energy levels of the reactants and the products, and we can work out whether energy is released or stored in a reaction. Burning hydrogen and oxygen releases a lot of energy, but by the rules of thermodynamics, breaking water apart into its components requires the input of exactly the same amount of energy. You can't get round this. Worse, in the real world there are losses at every conversion step, so you can't even break even (ain't things unfair!). In your plane you could, in principle, split water and burn it in the engine for propulsion. But you would need some other source of the vast amount of energy required to split the water. That implies both another fuel and another engine. In reality they would vastly outweigh any imagined savings in weight and cost. Even if you could build some sort of sci-fi engine that both splits water and then burns it again, you would still be nowhere: the entire output of the burning would be required for the splitting, with nothing left over to generate thrust (and in the real world there are losses, so your plane would rapidly drop out of the sky).
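To make the energy-balance point concrete, here is a back-of-envelope sketch (my addition, not the answerer's). The enthalpy values are the standard ones for liquid water; the efficiencies are purely assumed, just to show that losses make the cycle a net energy sink.

```python
# Back-of-envelope energy balance for "split water on board, then burn it".
# Enthalpies are standard values for H2O(l); efficiencies are assumed.

dH_split = +286.0   # kJ/mol needed for H2O(l) -> H2 + 1/2 O2
dH_burn  = -286.0   # kJ/mol released by H2 + 1/2 O2 -> H2O(l)

# Ideal, lossless cycle: exactly zero net energy, nothing left for thrust.
print(dH_split + dH_burn)                # 0.0

# With losses at each conversion step (hypothetical efficiencies):
eta_split = 0.7     # fraction of supplied energy that actually splits water
eta_work  = 0.4     # fraction of combustion heat converted into thrust work

energy_needed  = dH_split / eta_split    # ~409 kJ per mol of water split
work_recovered = -dH_burn * eta_work     # ~114 kJ per mol of water burned
print(work_recovered - energy_needed)    # ~ -294 kJ/mol: a large net loss
```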
{ "source": [ "https://chemistry.stackexchange.com/questions/82935", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/52212/" ] }
83,845
I bought different kinds of acid for experiments and home usage, which I stored in secured containers in an IKEA cabinet. I just realized that, despite the fact that all the containers are properly closed, the metal parts of the cabinet are completely corroded. I am quite surprised, because there shouldn't be much vapor in there. What is the correct way to store acids at home? Should I purchase some specific container? How can I prevent vapors from escaping? Perhaps the problem is somewhere else and my containers are not good enough for my acids. The acids I have: $\ce{H2SO4}$ (30%), $\ce{HCl}$ (32%), $\ce{H3PO4}$ (85%), $\ce{HNO3}$ (10%), $\ce{CH3COOH}$ (98%). Note: the chemicals are actually stored in my electric/electronic workshop, which is a separate room at home and always locked when I have visitors.
First I'd locate the bottle which causes the problem. Usually HCl is the #1 suspect, but to be sure you can put a vial with smelling salts (an aqueous solution of $\ce{(NH4)2CO3}$) or ammonia in the box with the acids; a white coating of $\ce{NH4Cl}$ on a bottle signifies the leak. It's also good practice to store acids in glass bottles with a proper joint (teflon ring) and a screw cap (e.g. Merck's SafetyCap). I would strongly recommend getting proper bottles as soon as possible. Plastic bottles are only used to reduce production and transportation costs; they are a poor choice for long-term storage of chemicals. Even thick plastic is prone to diffusion, whereas glass is not; also plastic is, well, plastic, and is prone to mechanical deformation, so it is also tricky to keep the bottleneck–cap joint tight over time. In the meantime I'd wrap the necks of the bottles with parafilm, and/or pour some baking soda (sodium bicarbonate) on the bottom of the container where you store the bottles. This should help to neutralize the vapors before they reach the furniture and your nose, but it's a temporary solution.
{ "source": [ "https://chemistry.stackexchange.com/questions/83845", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/17572/" ] }
85,425
I'm aware that isotopes like $\ce{^14C}$ have a known half-life, which means that over a span of roughly $5730$ years, half of the $\ce{^14C}$ atoms decay into $\ce{^14N}$. Are there any substances or methods known to speed up this process? For example, is there something that, applied to $\ce{^14C}$, would catalyze a chemical reaction resulting in $\ce{^14N}$? Or would sticking the atoms in a particle accelerator and slamming them together result in $\ce{^14N}$, or otherwise transmute elements?
It is possible to modify nuclear decay rates using chemistry, though it is rare and the effect is usually very small. Here I summarize the information available in this link. You may want to see the references within. There is a type of nuclear decay called electron capture, where a nuclide directly captures an electron from the innermost electron shells and transforms a proton into a neutron. Therefore, there is coupling between the nucleus and the wavefunctions of the innermost electrons in this form of radioactive decay. Usually the core electrons are very weakly disturbed by changes in chemical environment, but they are changed slightly. For especially light atoms, where the core electrons are very close to the valence shell, such as $\ce{^7_4 Be}$, this change results in a measurable difference in decay rate, varying by 0.1–1% compared to the isolated atom. In a more dramatic case, there is a subset of beta decays called bound-state beta decay, where the electron released by the decaying neutron is immediately captured into a bound electronic state of the daughter atom instead of escaping. Apparently, if the parent atom is stripped completely bare of its electrons, and if the energy involved in the nuclear decay is comparatively low, once again there is a meaningful coupling between nuclear and electronic states. For the case of rhenium-187, the neutral atom $\ce{^187_75 Re}$ has a half-life of 42 billion years, but upon full ionization to $\ce{^187_75 Re^75+}$, the half-life is reduced to 32.9 years! For dysprosium-163, the neutral atom $\ce{^163_66 Dy}$ has a half-life so long that its decay has not been observed, but when ionized to $\ce{^163_66 Dy^{66+}}$ its half-life is reduced to 47 days!
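For a sense of scale (my addition), the quoted half-lives can be converted into decay constants with $\lambda = \ln 2 / t_{1/2}$; a small sketch using the numbers from the answer:

```python
# Convert the half-lives quoted above into decay constants, lambda = ln(2)/t_half,
# to see how large the change is for fully ionized 187Re.
import math

def decay_constant(t_half_years):
    return math.log(2) / t_half_years     # probability of decay per year

lam_neutral = decay_constant(42e9)        # neutral 187Re: 42 billion years
lam_bare    = decay_constant(32.9)        # bare 187Re(75+): 32.9 years

print(lam_neutral)              # ~1.7e-11 per year
print(lam_bare)                 # ~2.1e-2  per year
print(lam_bare / lam_neutral)   # ~1.3e9: decay is roughly a billion times faster
```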
{ "source": [ "https://chemistry.stackexchange.com/questions/85425", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/54511/" ] }
86,158
The mass spectrum of bromine, with the molecular ions $\ce{^{158}Br2+}$, $\ce{^{160}Br2+}$ and $\ce{^{162}Br2+}$: As you can see, the $\ce{^{160}Br2+}$ peak is almost double in intensity compared to the $\ce{^{158}Br2+}$ and $\ce{^{162}Br2+}$ peaks. The book I am reading simply states that this is because the probability of two different isotopes occurring in a $\ce{Br2}$ molecule is twice that of the same isotope appearing in a $\ce{Br2}$ molecule. This is supported by the $\ce{^{160}Br2+}$ peak, formed from the $\ce{^{79}Br}$ and $\ce{^{81}Br}$ isotopes. Likewise, the $\ce{^{158}Br2+}$ peak is formed from two $\ce{^{79}Br}$ isotopes and $\ce{^{162}Br2+}$ from two $\ce{^{81}Br}$ isotopes. However, I am confused by this explanation. Why is the probability of two different isotopes occurring in a $\ce{Br2}$ molecule twice that of the same isotope appearing in a $\ce{Br2}$ molecule?
All possible arrangements of the $\ce{Br2}$ molecule: $$79 + 79 = 158 \\ \color{red}{79 + 81} = 160 \\ \color{red}{81 + 79} = 160 \\ 81 + 81 = 162$$ The amounts of $\ce{^{79}Br}$ and $\ce{^{81}Br}$ in nature are roughly the same, thus each permutation is equally probable. There are two arrangements that lead to $160$, while $158$ and $162$ each have only one arrangement. Therefore $160$ is twice as likely to be found compared to the other masses.
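If you want to see the roughly 1:2:1 ratio fall out of the actual isotopic abundances (approximately 50.7% $\ce{^{79}Br}$ and 49.3% $\ce{^{81}Br}$), here is a small sketch of the binomial bookkeeping; it is an illustration added here, not part of the original answer.

```python
# The 1:2:1 pattern is just a binomial distribution over the two isotopes.
# Natural abundances below are approximate (79Br ~50.7%, 81Br ~49.3%).

p79, p81 = 0.507, 0.493

prob = {
    158: p79 * p79,        # 79 + 79
    160: 2 * p79 * p81,    # 79 + 81 or 81 + 79: two arrangements
    162: p81 * p81,        # 81 + 81
}

base = prob[158]
for mass, p in prob.items():
    print(mass, round(p, 3), round(p / base, 2))
# 158  ~0.257  1.0
# 160  ~0.500  1.94   -> roughly twice the 158 peak
# 162  ~0.243  0.95
```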
{ "source": [ "https://chemistry.stackexchange.com/questions/86158", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/55213/" ] }
87,672
The Haber–Bosch process is used to synthesize ammonia from atmospheric nitrogen and hydrogen derived from a hydrocarbon such as methane. The process requires high temperatures and pressures and also requires repeated cycling of the gases. I would have thought that atmospheric argon and other noble gases would slowly build up in the reactor. How are these inert gases removed?
Inert gaseous components such as methane and argon do indeed have to be eliminated from the system so that they do not lower the partial pressures of the reactants too much. Technically, there is usually a standalone gas separation plant where the extraction of argon from the recycle gas is performed, basically using a modified Linde process [1, pp. 428–430]. Suitable cryoprocesses [...] are available for production of noble gases from synthesis purge gas. Initially, extensive separation of nitrogen, argon, and methane is carried out by partial condensation. Purge argon is then recovered from the condensate in a two-stage condensation process. If helium is to be recovered it can be concentrated by liquefaction of the hydrogen in the hydrogen-rich gas phase, followed by purification. The heavier noble gases, krypton and xenon, pass into the methane fraction. The purge gas at pressures of up to $\pu{70 bar}$ is first introduced into the adsorber (a) where traces of water and ammonia are removed. It is then transferred to the heat exchangers (b1) and (b2) for cooling. The gas is then fed into separators (c1) and (c2) where separation of liquefied nitrogen, argon, and methane from gaseous hydrogen takes place, and dissolved hydrogen flashes off. The liquid bottom product is fed into the fractionating column (d1) where methane (bottom) is separated from the nitrogen fraction (top). A methane-free nitrogen – argon mixture (liquid) is withdrawn from the middle of this column and fed as reflux into the $\ce{Ar}$ purification column (d2), where nitrogen is separated from argon. The bottom product is argon of product purity and is transferred to a vacuum-insulated storage tank. Both columns (d1) and (d2) are operated within an $\ce{N2}$ cycle in which cold is produced by expanding high- or medium-pressure nitrogen. The $\ce{CH4}$ bottom stream from (d1), compressed by liquid pump (g), is evaporated against feed gas and normally led to battery limits as fuel gas. The higher boiling noble gases krypton and xenon are contained in this stream and, in principle, can be isolated. Reference Häussinger, P.; Glatthaar, R.; Rhode, W.; Kick, H.; Benkmann, C.; Weber, J.; Wunschel, H.-J.; Stenke, V.; Leicht, E.; Stenger, H. In Ullmann's Encyclopedia of Industrial Chemistry; Wiley-VCH Verlag GmbH & Co. KGaA, Ed.: Weinheim, Germany, 2001. DOI: 10.1002/14356007.a17_485.
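As an aside (my addition, not from the answer or from Ullmann's), the reason a purge is needed at all can be seen from a toy steady-state mass balance on the synthesis loop; the numbers below are entirely hypothetical.

```python
# Toy steady-state mass balance on a synthesis loop (illustrative only, not the
# actual plant design described above). At steady state, the argon entering with
# the make-up gas must leave with the purge stream:
#     F_makeup * x_in = F_purge * x_loop
# so the argon fraction in the loop is x_loop = (F_makeup / F_purge) * x_in.

x_in = 0.003          # assumed argon mole fraction in the make-up syngas
makeup = 100.0        # arbitrary make-up gas flow (mol/s)

for purge in (10.0, 5.0, 2.0):        # hypothetical purge flows (mol/s)
    x_loop = makeup / purge * x_in
    print(purge, round(x_loop, 3))
# Smaller purge -> inerts pile up in the loop and dilute the N2/H2, which is
# why the purge gas (and the argon it carries) is processed as described above.
```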
{ "source": [ "https://chemistry.stackexchange.com/questions/87672", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/56629/" ] }
87,803
Since the air in the 'observation chamber' is ionized using X-rays, there must be several free electrons in the chamber, so the oil drops are exposed to several electrons, and it seems intuitive that a single oil drop should catch more than one electron. Then why is it that there's just one electron on a droplet? Or do they selectively choose those droplets which have a single electron residing on them?
There does not have to be just one electron per drop. Say you have a drop which, in reality, picked up four electrons, another with five and a third drop with seven. None has one electronic charge, but when you measure charges you find they have a common factor; the first drop has four times that factor, the second has five times that factor and the third drop shows a multiplier of seven. It was this common factor, not necessarily the charge on any specific drop, that was recognized as one electronic charge.
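The "common factor" argument is essentially a greatest-common-divisor calculation on the measured charges. Here is a toy version (my addition) using the 4-, 5- and 7-electron drops from the answer; the charges are simulated and noise-free, which is what lets a plain integer gcd work.

```python
# Toy version of the "common factor" argument: drops carrying 4, 5 and 7
# electrons share one divisor, the electronic charge. Values are simulated,
# not real Millikan data.
from math import gcd
from functools import reduce

e = 1.602e-19                                  # coulombs (modern value)
measured = [4 * e, 5 * e, 7 * e]               # simulated, noise-free drop charges

scale = 1e-22                                  # fine grid for converting to integers
ints = [round(q / scale) for q in measured]    # [6408, 8010, 11214]
unit = reduce(gcd, ints) * scale

print(unit)                                    # ~1.602e-19 C: one electronic charge
print([round(q / unit) for q in measured])     # [4, 5, 7]
```

With real, noisy data one would instead search for the largest charge unit that makes every measurement close to an integer multiple, but the underlying reasoning is the same as in the answer.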
{ "source": [ "https://chemistry.stackexchange.com/questions/87803", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/56756/" ] }
88,849
How are poisons discovered? Does someone have to die/be poisoned from it first? Or are there other ways of discovering the harmfulness of a substance? Perhaps everything is tested on other animals prior to testing them on humans?
Alle Dinge sind Gift, und nichts ist ohne Gift, allein die Dosis macht dass ein Ding kein Gift ist ("All things are poison, and nothing is without poison; the dose alone makes a thing not a poison", i.e. the dose makes the poison) - Paracelsus Poisons (I'm going to use this as an umbrella term for "toxins" and "venoms" as well; bear in mind, though, they are not the same thing) have been known since antiquity. Back in the good old days, you figured out whether something was poisonous or not by eating/touching it (or getting someone else to do it), i.e. "discovering" a poison was simply a matter of chance. These chance encounters alone led to the discovery of numerous poisons. With the advent of chemistry, the gents in white lab coats figured out that compounds that bear resemblance to already well-known poisons, in terms of their functional groups and structures, are also toxic (albeit to vastly different degrees). Poisons could now be identified a priori (you could tell it would end badly if you were exposed to such a substance, but not how badly). And no, you can't really quantify a poison's effects without testing it on something, nor can you tell how much of something would be needed to kill or severely maim. This is where we bring in the idea of a median lethal dose $\pu{LD_{50}}$, which is, simply put, the dose of a substance that kills half of the animals in a particular test group. $\pu{LD_{50}}$ values for a particular substance depend on the animal used. Of course, the only way to get an accurate $\pu{LD_{50}}$ (which itself is really a "mean/average" value of sorts) for a human is to actually poison someone, which doesn't sound very nice. So you do the next best thing: you measure it for a rat/chimp, couple it with your knowledge of the poison's mechanism of action, and extrapolate the value to something that would kill a person. Another way to establish degree of toxicity, which doesn't involve killing animals, would be by exposing a cell/tissue culture (and not the entire animal) to the potential poison.
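Since $\pu{LD_{50}}$ values are reported per kilogram of body weight, here is a minimal sketch (my addition) of the unit bookkeeping only; the number is made up, and real interspecies extrapolation uses allometric scaling and large safety factors rather than the naive linear scaling shown here.

```python
# Naive illustration of the mg/kg bookkeeping behind LD50 values.
# Real extrapolation between species is NOT linear in body mass; this is
# only to show what the units mean.

ld50_rat = 192.0      # hypothetical oral LD50 in rats, mg per kg body weight
rat_mass = 0.25       # kg
human_mass = 70.0     # kg

dose_per_rat = ld50_rat * rat_mass          # mg that kills half the test rats
naive_human_estimate = ld50_rat * human_mass  # mg, crude linear scaling only

print(dose_per_rat)            # 48 mg
print(naive_human_estimate)    # 13440 mg, i.e. ~13.4 g (order-of-magnitude guess)
```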
{ "source": [ "https://chemistry.stackexchange.com/questions/88849", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/36222/" ] }
88,922
2,3-Dimethyl-2,3-diphenylbutane (Dicumyl) (63). A solution of 2-bromo-2-phenylpropane (10.0 g, 50 mmol) in anhydrous diethyl ether (25 mL) was stirred with magnesium turnings (0.60 g, 0.025 g atom) overnight. The reaction mixture was poured into aqueous $\ce{NH4Cl}$ solution (100 mL, 5%) and extracted with dichloromethane (100 mL). The solvent was evaporated, and the solid residue was recrystallized from 95% ethanol to give 2,3-dimethyl-2,3-diphenylbutane (2.4 g, 47%). Does anyone have any intuition as to how this reaction works mechanistically? Is it a Grignard formation followed by a homocoupling event? Can anyone see a reason why an ortho-methyl substituent on the phenyl ring might interfere with this reaction?
{ "source": [ "https://chemistry.stackexchange.com/questions/88922", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/40250/" ] }
90,573
As nitrogen has $1$ lone pair and $3$ unpaired electrons, it seems it should have a maximum covalency of either $5$ or $3$. But why does it have a maximum covalency of $4$ instead? Why does it leave $1$ electron unused? Why would it have to break up a lone pair?
Recall that covalency is the number of shared electron pairs formed by an atom of that element. Nitrogen's maximum covalency is indeed $4$. And no, it does not break up its lone pair. I'll give you a simple example. Have a look at the Lewis structure of the ammonium ion: (source) Notice that nitrogen's octet is complete as soon as it bonds with three $\ce{H}$ atoms (aka forms ammonia). The fourth covalent bond is actually a coordinate covalent bond, formed when that nitrogen atom's lone pair gets donated to a proton. This is also the maximum covalency for the nitrogen atom, since it has no more unpaired electrons that could be paired up with other atoms to form more covalent bonds. Homework exercise: Can you now deduce the maximum covalency of nitrogen's elder brother, oxygen? Caution: extra stuff ahead. This will help those with knowledge beyond high school level. If you're at or below high school level, go back home and play with your cat. After-thought: Valency is a useless term that doesn't help you do any chemistry. Different sources define it differently (two definitions on Wikipedia). It just helps fill high school textbooks with more pages, but it becomes irrelevant with the introduction of better terms like coordination number, which actually tells you something about the structure of the molecule. While "valency" may be useful for getting an initial grip in a basic beginners' course, there are limitations to this point of view. Here's an example of such a contradiction. You may expect elements of period $3$ and above to display higher covalencies. One such example is phosphorus. Though it belongs to the same group as nitrogen, it can form compounds like $\ce{PCl5}$, (apparently) increasing its maximum covalency to $5$ instead. The reasons for this are usually attributed to hypervalency/octet-expansion, but these are wrong and obsolete concepts, having been superseded by newer concepts. In fact, $\ce{P}$ still has a covalency of four in $\ce{PCl5}$, since there are only four shared pairs of electrons (the non-bonding electrons don't count). This reinforces the idea that coordination number is a better and more useful term than valency. ($\ce{P}$ now has a coordination number of $5$ in $\ce{PCl5}$.)
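As a quick check on the $\ce{NH4+}$ Lewis structure discussed above (my addition, just restating the electron bookkeeping), the count works out to exactly four shared pairs:

$$\underbrace{5}_{\text{valence e}^-\text{ of N}} + \underbrace{4 \times 1}_{\text{four H atoms}} - \underbrace{1}_{\text{positive charge}} = 8\ \text{electrons} = 4\ \text{shared pairs}$$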
{ "source": [ "https://chemistry.stackexchange.com/questions/90573", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/58961/" ] }
90,586
My book says: "The reason for this difference is that the electron pair in a bond is further from the nucleus of the central atom than the electron pair in a lone pair." I don't get it: how does the distance make any difference? I'm taking AS-level chemistry.
{ "source": [ "https://chemistry.stackexchange.com/questions/90586", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/58971/" ] }
91,399
I came across this structure, which has pi bonds only in its canonical forms (which are also unstable compared to the first structure). Is it aromatic? Also, is it compulsory for an aromatic compound to contain a carbon ring? Can't a conjugated system without carbon be aromatic (as in this case)?
The modern definition of aromaticity from deep theoreticians is that the π-system needs to support aromatic ring currents . Borazine can support it, so it is technically aromatic. Aromatic systems that do not contain carbon are not really all that common, but they are known. Pentazole was detected, for example. The problem is, that for aromaticity to be a thing we need a π-system, and π-systems effectively restrict us to BCNOS elements, since other elements are not eager to form π-bonds. Aromatic systems are usually 5- or 6-membered rings. Add in the fact that rings with a lot of N/O atoms often happily go kaboom after a funny look, and it's not surprising we have very few actual examples here. Still, some people insist on doing unreasonable things with results from as far back as the 50s .
{ "source": [ "https://chemistry.stackexchange.com/questions/91399", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/59558/" ] }
93,844
Since region 1 is closer to the source, I presume it to be the hottest, as complete combustion takes place there. Also, this is the part where the gas mixture (responsible for the flame) reacts with oxygen first (as soon as the valve is open), so the reaction should take place much more quickly and more heat should be released. But in my book, the answer is given as region 2. Where am I wrong? I know there exists a question similar to this in our community: Bunsen burner and hottest part, but the answers don't answer my question.
The question itself is poorly written as given. The diagram should have "regions" and "points." So what seems to be labeled "Region 2" and "Region 4" I would name as "Point 2" and "Point 3" respectively. So the innermost conical area would be "Region 1", the next conical area would be "Region 2" and the third conical area would be "Region 3." Region 1 is where the mostly unburned gas–oxygen mixture is pushing above the lip of the Bunsen burner. Region 1 exists because the gas coming out of the tube is cool. If the air port is open and the gas flow is too low, then the gas will start to burn down the tube and you'll get a "strike back" where the flame is either (1) blown out or (2) left burning at the jet. If the gas flow is too great, you can blow the burning region off contact with the upper tube; if you increase the gas flow even more, you can in fact blow the flame out. Region 2 would be a reducing region within the flame. This region is hot and burns the fuel and oxygen coming out of the tube. Region 3 would be an oxidizing region of the flame. Here oxygen from the outside air (oxygen which didn't come up the tube) migrates into the flame and burns the excess fuel which is not burnt in region 2. You'd use the reducing and oxidizing regions when doing bead tests for identification. Point 2 would be the hottest part of the flame, as shown by the composite image of a paperclip in the flame from the YouTube video.
{ "source": [ "https://chemistry.stackexchange.com/questions/93844", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/56308/" ] }
93,876
Which among these hydrocarbons is most acidic? Here, I thought about hyperconjugation. For this, I counted the alpha hydrogens, i.e. the number of hydrogens attached to a carbon that is itself attached to an sp2-hybridized carbon. What I get is that (C) has the most such hydrogens, so my assumption is that it should be the most acidic. But that's not the case. Where am I going wrong?
{ "source": [ "https://chemistry.stackexchange.com/questions/93876", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/59289/" ] }