1.14: Character Tables
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.14%3A_Character_Tables
A character table summarizes the behavior of all of the possible irreducible representations of a group under each of the symmetry operations of the group. The character table for \(C_{3v}\) is shown below.\[\begin{array}{lllll} \hline C_{3v},3m & E & 2C_3 & 3\sigma_v & h=6 \\ \hline A_1 & 1 & 1 & 1 & z, z^2, x^2+y^2 \\ A_2 & 1 & 1 & -1 & R_z \\ E & 2 & -1 & 0 & \begin{pmatrix} x, y \end{pmatrix}, \begin{pmatrix} x^2-y^2, xy \end{pmatrix}, \begin{pmatrix} xz, yz \end{pmatrix}, \begin{pmatrix} R_x, R_y \end{pmatrix} \\ \hline \end{array} \label{14.1}\]The various sections of the table are as follows: the left-hand column lists the symmetry (Mulliken) labels of the irreducible representations; the top row lists the symmetry operations of the group, grouped into classes, together with the order of the group, \(h\); the main body of the table contains the characters; and the final column lists functions that transform as each of the irreducible representations. The functions listed in the final column of the table are important in many chemical applications of group theory, particularly in spectroscopy. For example, by looking at the transformation properties of \(x\), \(y\) and \(z\) (sometimes given in character tables as \(T_x\), \(T_y\), \(T_z\)) we can discover the symmetry of translations along the \(x\), \(y\), and \(z\) axes. Similarly, \(R_x\), \(R_y\) and \(R_z\) represent rotations about the three Cartesian axes. As we shall see later, the transformation properties of \(x\), \(y\), and \(z\) can also be used to determine whether or not a molecule can absorb a photon of \(x\)-, \(y\)-, or \(z\)-polarized light and undergo a spectroscopic transition. The Cartesian products play a similar role in determining selection rules for Raman transitions, which involve two photons. Character tables for common point groups are given in Appendix B. In many applications of group theory, we only need to know the characters of the representative matrices, rather than the matrices themselves. Luckily, when each basis function transforms as a 1D irreducible representation (which is true in many cases of interest) there is a simple shortcut to determining the characters without having to construct the entire matrix representation.
All we have to do is to look at the way the individual basis functions transform under each symmetry operation. For a given operation, step through the basis functions as follows: if a basis function is unchanged by the symmetry operation, it contributes \(1\) to the character; if it changes sign, it contributes \(-1\); and if it is mapped onto a different basis function, it contributes \(0\). Summing the contributions from all of the basis functions gives the character of the operation. Try this for the \(s\) orbital basis we have been using for the \(C_{3v}\) group. You should find you get the same characters as we obtained from the traces of the matrix representatives. We can also work out the characters fairly easily when two basis functions transform together as a 2D irreducible representation. For example, in the \(C_{3v}\) point group the \(x\) and \(y\) axes transform together as \(E\). If we carry out a rotation about \(z\) by an angle \(\theta\), our \(x\) and \(y\) axes are transformed onto new axes \(x'\) and \(y'\). However, the new axes can each be written as a linear combination of our original \(x\) and \(y\) axes. Using the rotation matrices introduced in Section 9, we see that:\[\begin{array}{ccc}x' & = & \cos\theta \: x + \sin\theta \: y \\ y' & = & -\sin\theta \: x + \cos\theta \: y \end{array} \label{14.2}\]For one-dimensional irreducible representations we asked if a basis function/axis was mapped onto itself, minus itself, or something different. For two-dimensional irreducible representations we need to ask how much of the ‘old’ axis is contained in the new one. From the above we see that the \(x'\) axis contains a contribution \(\cos\theta\) from the \(x\) axis, and the \(y'\) axis contains a contribution \(\cos\theta\) from the \(y\) axis. The characters of the \(x\) and \(y\) axes under a rotation through \(\theta\) are therefore \(\cos\theta\), and the overall character of the \(E\) irreducible representation is therefore \(\cos\theta + \cos\theta = 2\cos\theta\). For a \(C_3\) rotation through 120 degrees, the character of the \(E\) irreducible representation is therefore \(2\cos 120° = -1\).
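The \(2\cos\theta\) result can be checked directly from the rotation matrix in Equation \(\ref{14.2}\); the short Python sketch below is illustrative (the function names are not from the text):

```python
import math

def rotation_matrix(theta):
    """2x2 matrix representing a rotation of the (x, y) axes by theta,
    as in Equation 14.2."""
    return [[math.cos(theta),  math.sin(theta)],
            [-math.sin(theta), math.cos(theta)]]

def character(matrix):
    """The character is the trace (sum of the diagonal elements)."""
    return sum(matrix[i][i] for i in range(len(matrix)))

# For a C3 rotation (theta = 120 degrees) the character of the (x, y)
# pair is 2*cos(120 deg) = -1, as derived in the text.
chi_C3 = character(rotation_matrix(2 * math.pi / 3))
```

Running the same function for \(\theta = 0\) (the identity) returns \(2\), the dimension of the representation, as expected.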
In general, when an axis is rotated by an angle \(\theta\) by a symmetry operation, its contribution to the character for that operation is \(\cos\theta\). In many cases (see Appendix B), the characters for rotations \(C_n\) and improper rotations \(S_n\) are complex numbers, usually expressed in terms of the quantity \(\epsilon = \exp(2\pi i/n)\). It is fairly straightforward to reconcile this with the fact that in chemistry we are generally using group theory to investigate physical problems in which all quantities are real. It turns out that whenever our basis spans an irreducible representation whose characters are complex, it will also span a second irreducible representation whose characters are the complex conjugates of the first irreducible representation i.e. complex irreducible representations occur in pairs. According to the strict mathematics of group theory, each irreducible representation in the pair should be considered as a separate representation. However, when applying such irreducible representations in physical problems, we add the characters for the two irreducible representations together to get a single irreducible representation whose characters are real. As an example, the ‘correct’ character table for the group \(C_3\) takes the form:\[\begin{array}{l|lll} C_3 & E & C_3 & C_3^2 \\ \hline A & 1 & 1 & 1 \\ \hline E & \begin{Bmatrix} 1 & \epsilon & \epsilon^* \\ 1 & \epsilon^* & \epsilon \end{Bmatrix} & & \end{array} \label{14.3}\]where \(\epsilon = \exp(2\pi i/3)\).
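The pairing of complex-conjugate characters can be verified numerically; the following Python sketch (illustrative, not from the text) shows that the summed characters of the two complex components of \(E\) in \(C_3\) are real:

```python
import cmath

n = 3
eps = cmath.exp(2j * cmath.pi / n)   # epsilon = exp(2*pi*i/3)

# Characters of the two complex-conjugate irreducible representations
# of C3 under the operations (E, C3, C3^2):
row1 = [1, eps, eps.conjugate()]
row2 = [1, eps.conjugate(), eps]

# Adding the two rows term by term gives real characters: (2, -1, -1),
# since eps + eps* = 2*cos(2*pi/3) = -1.
combined = [(a + b).real for a, b in zip(row1, row2)]
```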
However, as chemists we would usually combine the two parts of the \(E\) irreducible representation to give:\[\begin{array}{l|lll} C_3 & E & C_3 & C_3^2 \\ \hline A & 1 & 1 & 1 \\ E & 2 & -1 & -1 \end{array} \label{14.4}\]This page titled 1.14: Character Tables is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.15: Reduction of representations II
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.15%3A_Reduction_of_representations_II
By making maximum use of molecular symmetry, we often greatly simplify problems involving molecular properties. For example, the formation of chemical bonds is strongly dependent on the atomic orbitals involved having the correct symmetries. To make full use of group theory in the applications we will be considering, we need to develop a little more ‘machinery’. Specifically, given a basis set (of atomic orbitals, for example) we need to find out: firstly, which irreducible representations are spanned by the basis functions, and secondly, how to construct linear combinations of the basis functions that transform as a particular irreducible representation. It turns out that both of these problems can be solved using something called the ‘Great Orthogonality Theorem’ (GOT for short). The GOT summarizes a number of orthogonality relationships implicit in matrix representations of symmetry groups, and may be derived in a somewhat qualitative fashion by considering these relationships in turn. Some of you might find the next section a little hard going. In it, we will derive two important expressions that we can use to achieve the two goals we have set out above. It is not important that you understand every step in these derivations; they have mainly been included just so you can see where the equations come from. However, you will need to understand how to use the results. Hopefully you will not find this too difficult once we’ve worked through a few examples. You are probably already familiar with the geometric concept of orthogonality. Two vectors are orthogonal if their dot product (i.e. the projection of one vector onto the other) is zero. An example of a pair of orthogonal vectors is provided by the \(\textbf{x}\) and \(\textbf{y}\) Cartesian unit vectors.\[\textbf{x} \cdot \textbf{y} = 0 \label{15.1}\] A consequence of the orthogonality of \(\textbf{x}\) and \(\textbf{y}\) is that any general vector in the \(xy\) plane may be written as a linear combination of these two basis vectors. \[\textbf{r} = a\textbf{x} + b\textbf{y} \label{15.2}\]Mathematical functions may also be orthogonal.
Two functions, \(f_1(x)\) and \(f_2(x)\), are defined to be orthogonal if the integral over their product is equal to zero i.e. \[\int f_1(x) f_2(x) dx = 0\]This simply means that there must be ‘no overlap’ between orthogonal functions, which is the same as the orthogonality requirement for vectors, above. In the same way as for vectors, any general function may be written as a linear combination of a suitably chosen set of orthogonal basis functions. For example, the Legendre polynomials \(P_n(x)\) form an orthogonal basis set for functions of one variable \(x\).\[f(x) = \sum_n c_n P_n(x) \label{15.3}\]The irreducible representations of a point group satisfy a number of orthogonality relationships:1. If corresponding matrix elements in all of the matrix representatives of an irreducible representation are squared and added together, the result is equal to the order of the group divided by the dimensionality of the irreducible representation. i.e.\[\sum _g \Gamma_k(g)_{ij} \Gamma_k(g)_{ij} = \dfrac{h}{d_k} \label{15.4}\]where \(k\) labels the irreducible representation, \(i\) and \(j\) label the row and column position within the irreducible representation, \(h\) is the order of the group, and \(d_k\) is the dimensionality of the irreducible representation. e.g. The order of the group \(C_{3v}\) is 6. If we apply the above operation to the first element in the 2x2 (\(E\)) irreducible representation derived in Section 12, the result should be equal to \(\dfrac{h}{d_k}\) = \(\dfrac{6}{2}\) = 3. Carrying out this operation gives:\[1^2 + (-\dfrac{1}{2})^2 + (-\dfrac{1}{2})^2 + 1^2 + (-\dfrac{1}{2})^2 + (-\dfrac{1}{2})^2 = 1 + \dfrac{1}{4} + \dfrac{1}{4} + 1 + \dfrac{1}{4} + \dfrac{1}{4} = 3 \label{15.5}\]2. If instead of summing the squares of matrix elements in an irreducible representation, we sum the product of two different elements from within each matrix, the result is equal to zero.
i.e.\[\sum _g \Gamma_k(g)_{ij} \Gamma_k(g)_{i'j'} = 0 \label{15.6}\]where \(i \neq i'\) and/or \(j \neq j'\). E.g. if we perform this operation using the two elements in the first row of the 2D irreducible representation used in 1, we get:\[(1)(0) + (-\dfrac{1}{2})(-\dfrac{\sqrt{3}}{2}) + (-\dfrac{1}{2})(\dfrac{\sqrt{3}}{2}) + (1)(0) + (-\dfrac{1}{2})(\dfrac{\sqrt{3}}{2}) + (-\dfrac{1}{2})(-\dfrac{\sqrt{3}}{2}) = 0 + \dfrac{\sqrt{3}}{4} - \dfrac{\sqrt{3}}{4} + 0 - \dfrac{\sqrt{3}}{4} + \dfrac{\sqrt{3}}{4} = 0 \label{15.7}\]3. If we sum the product of two elements from the matrices of two different irreducible representations \(k\) and \(m\), the result is equal to zero. i.e.\[\sum_g \Gamma_k(g)_{ij} \Gamma_m(g)_{i'j'} = 0 \label{15.8}\]where there is now no restriction on the values of the indices \(i\), \(i'\), \(j\), \(j'\) (apart from the rather obvious restriction that they must be less than or equal to the dimensions of the irreducible representation). e.g. Performing this operation on the first elements of the \(A_1\) and \(E\) irreducible representations we derived for \(C_{3v}\) gives:\[(1)(1) + (1)(-\dfrac{1}{2}) + (1)(-\dfrac{1}{2}) + (1)(1) + (1)(-\dfrac{1}{2}) + (1)(-\dfrac{1}{2}) = 1 - \dfrac{1}{2} - \dfrac{1}{2} + 1 - \dfrac{1}{2} - \dfrac{1}{2} = 0 \label{15.9}\]We can combine these three results into one general equation, the Great Orthogonality Theorem\(^4\).\[\sum_g \Gamma_k(g)_{ij} \Gamma_m(g)_{i'j'} = \dfrac{h}{\sqrt{d_kd_m}} \delta_{km} \delta_{ii'} \delta_{jj'} \label{15.10}\]For most applications we do not actually need the full Great Orthogonality Theorem.
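These orthogonality relations are easy to verify numerically. The Python sketch below uses explicit \(2\times 2\) matrices for the \(E\) irreducible representation of \(C_{3v}\) of the kind derived in Section 12; the particular assignment of the three mirror planes is an assumption made for illustration:

```python
import math

s3 = math.sqrt(3) / 2

# The six 2x2 matrices of the E irreducible representation of C3v in the
# (x, y) basis. The assignment of the mirror planes is illustrative.
E_rep = {
    "E":    [[1, 0], [0, 1]],
    "C3+":  [[-0.5, -s3], [s3, -0.5]],
    "C3-":  [[-0.5, s3], [-s3, -0.5]],
    "sv":   [[1, 0], [0, -1]],
    "sv'":  [[-0.5, s3], [s3, 0.5]],
    "sv''": [[-0.5, -s3], [-s3, 0.5]],
}

h, d = 6, 2  # order of the group and dimensionality of E

# Relation 1: the sum of squares of one matrix element equals h/d = 3.
sum_sq = sum(m[0][0] ** 2 for m in E_rep.values())

# Relation 2: the sum of products of two different elements vanishes.
sum_cross = sum(m[0][0] * m[0][1] for m in E_rep.values())

# Relation 3: pairing with the (all-ones) A1 representation also vanishes.
sum_A1_E = sum(1 * m[0][0] for m in E_rep.values())
```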
A little mathematical trickery transforms Equation \(\ref{15.10}\) into the ‘Little Orthogonality Theorem’ (or LOT), which is expressed in terms of the characters of the irreducible representations rather than the irreducible representations themselves.\[\sum_g \chi_k(g) \chi_m(g) = h\delta_{km} \label{15.11}\]Since the characters for two symmetry operations in the same class are the same, we can also rewrite the sum over symmetry operations as a sum over classes.\[\sum_C n_C \chi_k(C) \chi_m(C) = h \delta_{km} \label{15.12}\]where \(n_C\) is the number of symmetry operations in class \(C\). In all of the examples we’ve considered so far, the characters have been real. However, this is not necessarily true for all point groups, so to make the above equations completely general we need to include the possibility of imaginary characters. In this case we have:\[\sum_C n_C \chi_k^*(C) \chi_m(C) = h \delta_{km} \label{15.13}\]where \(\chi_k^*(C)\) is the complex conjugate of \(\chi_k(C)\). Equation \(\ref{15.13}\) is of course identical to Equation \(\ref{15.12}\) when all the characters are real.In Section \(12\) we discovered that we can often carry out a similarity transform on a general matrix representation so that all the representatives end up in the same block diagonal form. When this is possible, each set of submatrices also forms a valid matrix representation of the group. If none of the submatrices can be reduced further by carrying out another similarity transform, they are said to form an irreducible representation of the point group. An important property of matrix representatives is that their character is invariant under a similarity transform. This means that the character of the original representatives must be equal to the sum of the characters of the irreducible representations into which the representation is reduced. e.g. 
if we consider the representative for the \(C_3^-\) symmetry operation in our \(NH_3\) example, we have:\[\begin{array}{ccccc} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} & \begin{array}{c} \text{similarity transform} \\ \longrightarrow \end{array} & \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \\ 0 & 0 & \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix} & = & \begin{pmatrix} 1 \end{pmatrix} \oplus \begin{pmatrix} 1 \end{pmatrix} \oplus \begin{pmatrix} -\dfrac{1}{2} & -\dfrac{\sqrt{3}}{2} \\ \dfrac{\sqrt{3}}{2} & -\dfrac{1}{2} \end{pmatrix} \\ \chi = 1 & & \chi = 1 & & \chi = 1 + 1 - 1 = 1 \end{array} \label{15.14}\]It follows that we can write the characters for a general representation \(\Gamma(g)\) in terms of the characters of the irreducible representations \(\Gamma_k(g)\) into which it can be reduced.\[\chi(g) = \sum_k a_k \chi_k(g) \label{15.15}\]where the coefficients \(a_k\) in the sum are the number of times each irreducible representation appears in the representation. This means that in order to determine the irreducible representations spanned by a given basis, all we have to do is determine the coefficients \(a_k\) in the above equation. This is where the Little Orthogonality Theorem comes in handy. If we take the LOT in the form of Equation \(\ref{15.11}\), and multiply each side through by \(a_k\), we get\[\Sigma_g a_k \chi_k(g) \chi_m(g) = h a_k \delta_{km} \label{15.16}\]Summing both sides of the above equation over \(k\) gives\[\Sigma_g \Sigma_k a_k \chi_k(g) \chi_m(g) = h \Sigma_k a_k \delta_{km} \label{15.17}\]We can use Equation \(\ref{15.15}\) to simplify the left hand side of this equation.
Also, the sum on the right hand side reduces to \(a_m\) because \(\delta_{km}\) is only non-zero (and equal to \(1\)) when \(k\) = \(m\)\[\Sigma_g \chi(g) \chi_m(g) = h a_m \label{15.18}\]Dividing both sides through by \(h\) (the order of the group), gives us an expression for the coefficients \(a_m\) in terms of the characters \(\chi(g)\) of the original representation and the characters \(\chi_m(g)\) of the \(m^{th}\) irreducible representation. \[ a_m = \dfrac{1}{h} \Sigma_g \chi(g) \chi_m(g) \label{15.19}\]We can of course write this as a sum over classes rather than a sum over symmetry operations.\[a_m = \dfrac{1}{h} \Sigma_C n_C \chi(C) \chi_m(C) \label{15.20}\]As an example, in Section \(12\) we showed that the matrix representatives we derived for the \(C_{3v}\) group could be reduced into two irreducible representations of \(A_1\) symmetry and one of \(E\) symmetry. i.e. \(\Gamma\) = 2\(A_1\) + \(E\). We could have obtained the same result using Equation \(\ref{15.20}\). The characters for our original representation and for the irreducible representations of the \(C_{3v}\) point group (\(A_1\), \(A_2\) and \(E\)) are given in the table below.\[\begin{array}{llll} \hline C_{3v} & E & 2C_3 & 3\sigma_v \\ \hline \chi & 4 & 1 & 2 \\ \hline \chi(A_1) & 1 & 1 & 1 \\ \chi(A_2) & 1 & 1 & -1 \\ \chi(E) & 2 & -1 & 0 \\ \hline \end{array} \label{15.21}\]From Equation \(\ref{15.20}\), the number of times each irreducible representation occurs for our chosen basis \(\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}\) is therefore\[\begin{array}{l} a(A_1) = \dfrac{1}{6}((1 \times 4 \times 1) + (2 \times 1 \times 1) + (3 \times 2 \times 1)) = 2 \\ a(A_2) = \dfrac{1}{6}((1 \times 4 \times 1) + (2 \times 1 \times 1) + (3 \times 2 \times (-1))) = 0 \\ a(E) = \dfrac{1}{6}((1 \times 4 \times 2) + (2 \times 1 \times (-1)) + (3 \times 2 \times 0)) = 1 \end{array} \label{15.22}\]i.e. Our basis is spanned by \(2A_1\) + \(E\), as we found before.\(^4\)The \(\delta_{ij}\) appearing in Equation \(\ref{15.10}\) are called Kronecker delta functions.
They are equal to \(1\) if \(i\) = \(j\) and \(0\) otherwise.
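The reduction procedure of Equation \(\ref{15.20}\) is mechanical enough to automate. The Python sketch below (function and variable names are illustrative) reduces the \(\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}\) representation of \(C_{3v}\):

```python
from fractions import Fraction

# C3v data: class sizes and characters for the classes (E, 2C3, 3sigma_v)
n_C = [1, 2, 3]
h = sum(n_C)  # order of the group = 6
irreps = {
    "A1": [1, 1, 1],
    "A2": [1, 1, -1],
    "E":  [2, -1, 0],
}

def reduce_representation(chi):
    """Apply a_m = (1/h) * sum_C n_C * chi(C) * chi_m(C)  (Equation 15.20)."""
    return {
        name: Fraction(sum(n * x * xm for n, x, xm in zip(n_C, chi, chi_m)), h)
        for name, chi_m in irreps.items()
    }

# Characters of the (sN, s1, s2, s3) basis from the text: chi = (4, 1, 2)
coeffs = reduce_representation([4, 1, 2])
```

Exact rational arithmetic (`Fraction`) makes a non-integer coefficient, which would signal a mistake in the characters, immediately visible.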
1.16: Symmetry Adapted Linear Combinations (SALCs)
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.16%3A_Symmetry_Adapted_Linear_Combinations_(SALCs)
Once we know the irreducible representations spanned by an arbitrary basis set, we can work out the appropriate linear combinations of basis functions that transform the matrix representatives of our original representation into block diagonal form (i.e. the symmetry adapted linear combinations). Each of the SALCs transforms as one of the irreducible representations of the reduced representation. We have already seen this in our \(NH_3\) example. The two linear combinations of \(A_1\) symmetry were \(s_N\) and \(s_1 + s_2 + s_3\), both of which are symmetric under all the symmetry operations of the point group. We also chose another pair of functions, \(2s_1 - s_2 - s_3\) and \(s_2 - s_3\), which together transform as the symmetry species \(E\). To find the appropriate SALCs to reduce a matrix representation, we use projection operators. You will be familiar with the idea of operators from quantum mechanics. The operators we will be using here are not quantum mechanical operators, but the basic principle is the same. The projection operator to generate a SALC that transforms as an irreducible representation \(k\) is \(\Sigma_g \chi_k(g) g\). Each term in the sum means ‘apply the symmetry operation \(g\) and then multiply by the character of \(g\) in irreducible representation \(k\)’. Applying this operator to each of our original basis functions in turn will generate a complete set of SALCs, i.e. to transform a basis function \(f_i\) into a SALC \(f_i'\), we use\[f_i' = \sum_g \chi_k(g) g f_i \tag{16.1}\]The way in which this operation is carried out will become much more clear if we work through an example.
We can break down the above equation into a fairly straightforward ‘recipe’ for generating SALCs: 1. construct a table showing the effect of each symmetry operation in the group on each of the original basis functions; 2. for the irreducible representation of interest, multiply each row of the table by the character of the corresponding symmetry operation; 3. sum the columns of the resulting table to obtain the (unnormalized) SALCs. Earlier (see Section \(10\)), we worked out the effect of all the symmetry operations in the \(C_{3v}\) point group on the \(\begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix}\) basis.\[\begin{array}{lccc} E & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_1, s_2 , s_3 \end{pmatrix} \\ C_3^+ & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_2, s_3, s_1 \end{pmatrix} \\ C_3^- & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_3, s_1, s_2 \end{pmatrix} \\ \sigma_v & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_1, s_3, s_2 \end{pmatrix} \\ \sigma_v' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_2, s_1, s_3 \end{pmatrix} \\ \sigma_v'' & \begin{pmatrix} s_N, s_1, s_2, s_3 \end{pmatrix} & \rightarrow & \begin{pmatrix} s_N, s_3, s_2, s_1 \end{pmatrix} \end{array} \tag{16.2}\]This is all we need to construct the table described in 1. above.\[\begin{array}{l|llll} & s_N & s_1 & s_2 & s_3 \\ \hline E & s_N & s_1 & s_2 & s_3 \\ C_3^+ & s_N & s_2 & s_3 & s_1 \\ C_3^- & s_N & s_3 & s_1 & s_2 \\ \sigma_v & s_N & s_1 & s_3 & s_2 \\ \sigma_v' & s_N & s_2 & s_1 & s_3 \\ \sigma_v'' & s_N & s_3 & s_2 & s_1 \end{array} \tag{16.3}\]To determine the SALCs of \(A_1\) symmetry, we multiply the table through by the characters of the \(A_1\) irreducible representation (all of which take the value \(1\)).
Summing the columns gives\[\begin{array}{rcl} s_N + s_N + s_N + s_N + s_N + s_N & = & 6s_N \\ s_1 + s_2 + s_3 + s_1 + s_2 + s_3 & = & 2(s_1 + s_2 + s_3) \\ s_2 + s_3 + s_1 + s_3 + s_1 + s_2 & = & 2(s_1 + s_2 + s_3) \\ s_3 + s_1 + s_2 + s_2 + s_1 + s_3 & = & 2(s_1 + s_2 + s_3) \end{array} \tag{16.4}\]Apart from a constant factor (which doesn’t affect the functional form and therefore doesn’t affect the symmetry properties), these are the same as the combinations we determined earlier. Normalizing gives us two SALCs of \(A_1\) symmetry.\[\begin{array}{rcl} \phi_1 & = & s_N \\ \phi_2 & = & \frac{1}{\sqrt{3}}(s_1 + s_2 + s_3) \end{array} \tag{16.5}\]We now move on to determine the SALCs of \(E\) symmetry. Multiplying the table above by the appropriate characters for the \(E\) irreducible representation gives\[\begin{array}{l|llll} & s_N & s_1 & s_2 & s_3 \\ \hline E & 2s_N & 2s_1 & 2s_2 & 2s_3 \\ C_3^+ & -s_N & -s_2 & -s_3 & -s_1 \\ C_3^- & -s_N & -s_3 & -s_1 & -s_2 \\ \sigma_v & 0 & 0 & 0 & 0 \\ \sigma_v' & 0 & 0 & 0 & 0 \\ \sigma_v'' & 0 & 0 & 0 & 0 \end{array} \tag{16.6}\]Summing the columns yields\[\begin{array}{l} 2s_N - s_N - s_N = 0 \\ 2s_1 - s_2 - s_3 \\ 2s_2 - s_3 - s_1 \\ 2s_3 - s_1 - s_2 \end{array} \tag{16.7}\]We therefore get three SALCs from this procedure. This is a problem, since the number of SALCs must match the dimensionality of the irreducible representation, in this case two. Put another way, we should end up with four SALCs in total to match our original number of basis functions. Added to our two SALCs of \(A_1\) symmetry, three SALCs of \(E\) symmetry would give us five in total. The resolution to our problem lies in the fact that the three SALCs above are not linearly independent. Any one of them can be written as a linear combination of the other two e.g. \(\begin{pmatrix} 2s_1 - s_2 - s_3 \end{pmatrix} = -\begin{pmatrix} 2s_2 - s_3 - s_1 \end{pmatrix} - \begin{pmatrix} 2s_3 - s_1 - s_2 \end{pmatrix}\).
To solve the problem, we can either throw away one of the SALCs, or better, make two linear combinations of the three SALCs that are orthogonal to each other.\(^5\) e.g. if we take \(2s_1 - s_2 - s_3\) as one of our SALCs and find an orthogonal combination of the other two (which turns out to be their difference), we have (after normalization)\[\begin{array}{rcl} \phi_3 & = & \frac{1}{\sqrt{6}}(2s_1 - s_2 - s_3) \\ \phi_4 & = & \frac{1}{\sqrt{2}}(s_2 - s_3) \end{array} \tag{16.8}\]These are the same linear combinations used in Section \(12\). We now have all the machinery we need to apply group theory to a range of chemical problems. In our first application, we will learn how to use molecular symmetry and group theory to help us understand chemical bonding.\(^5\) If we write the coefficients of \(s_1\), \(s_2\) and \(s_3\) for each SALC as a vector \(\begin{pmatrix} a_1, a_2, a_3 \end{pmatrix}\), then when two SALCs are orthogonal, the dot product of their coefficient vectors \(\begin{pmatrix} a_1, a_2, a_3 \end{pmatrix} \cdot \begin{pmatrix} b_1, b_2, b_3 \end{pmatrix} = \begin{pmatrix} a_1b_1 + a_2b_2 + a_3b_3 \end{pmatrix}\) is equal to zero.
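The projection-operator recipe can be carried out programmatically from the permutation table (Table 16.3). A minimal Python sketch, with illustrative names, reproduces the SALC coefficients found above:

```python
# Effect of each C3v operation on the basis (sN, s1, s2, s3), taken from
# Table 16.3: entry i is the index of the function that basis function i
# is mapped onto.
operations = {
    "E":    (0, 1, 2, 3),
    "C3+":  (0, 2, 3, 1),
    "C3-":  (0, 3, 1, 2),
    "sv":   (0, 1, 3, 2),
    "sv'":  (0, 2, 1, 3),
    "sv''": (0, 3, 2, 1),
}

# Characters of each operation in the A1 and E irreducible representations
chi_A1 = {g: 1 for g in operations}
chi_E = {"E": 2, "C3+": -1, "C3-": -1, "sv": 0, "sv'": 0, "sv''": 0}

def project(i, chi):
    """Coefficients of the SALC  f' = sum_g chi(g) * (g f_i)  (Equation 16.1)."""
    coeffs = [0, 0, 0, 0]
    for g, perm in operations.items():
        coeffs[perm[i]] += chi[g]
    return coeffs

salc_sN = project(0, chi_A1)   # proportional to sN
salc_s1 = project(1, chi_A1)   # proportional to s1 + s2 + s3
salc_E1 = project(1, chi_E)    # proportional to 2s1 - s2 - s3
# The orthogonal E partner used in the text, s2 - s3, has coefficient
# vector (0, 0, 1, -1); its dot product with salc_E1 is zero.
```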
1.17: Determining whether an Integral can be Non-zero
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.17%3A_Determining_whether_an_Integral_can_be_Non-zero
As we continue with this course, we will discover that there are many times when we would like to know whether a particular integral is necessarily zero, or whether there is a chance that it may be non-zero. We can often use group theory to distinguish between these two cases. You will have already used symmetry properties of functions to determine whether or not a one-dimensional integral is zero. For example, \(\sin(x)\) is an ‘odd’ function (antisymmetric with respect to reflection through the origin), and it follows from this that\[\int^{\infty}_{-\infty} \sin(x) dx = 0\]In general, an integral between these limits for any other odd function will also be zero. In the general case we may have an integral of more than one dimension. The key to determining whether a general integral is necessarily zero lies in the fact that because an integral is just a number, it must be invariant to any symmetry operation. For example, bonding in a diatomic (see next section) depends on the presence of a non-zero overlap between atomic orbitals on adjacent atoms, which may be quantified by an overlap integral. You would not expect the bonding in a molecule to change if you rotated the molecule through some angle \(\theta\), so the integral must be invariant to rotation, and indeed to any other symmetry operation. In group theoretical terms, for an integral to be non-zero, the integrand must transform as the totally symmetric irreducible representation in the appropriate point group. In practice, the integrand may not transform as a single irreducible representation, but it must include the totally symmetric irreducible representation. These ideas should become more clear in the next section. It should be noted that even when the irreducible representations spanned by the integrand do include the totally symmetric irreducible representation, it is still possible for the integral to be zero.
All group theory allows us to do is identify integrals that are necessarily zero based on the symmetry (or lack thereof) of the integrand.
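A quick numerical check illustrates the point about odd functions; for this sketch the infinite limits are replaced by a finite symmetric interval, and the integration routine is a simple midpoint rule written for illustration:

```python
import math

def riemann(f, a, b, n=20001):
    """Simple midpoint-rule estimate of the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx

# sin(x) is odd, so its integral over a symmetric interval vanishes...
I_sym = riemann(math.sin, -10.0, 10.0)

# ...while over an asymmetric interval it generally does not
# (the exact value of the integral of sin over [0, pi] is 2).
I_asym = riemann(math.sin, 0.0, math.pi)
```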
1.18: Bonding in Diatomics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.18%3A_Bonding_in_Diatomics
You will already be familiar with the idea of constructing molecular orbitals from linear combinations of atomic orbitals from previous courses covering bonding in diatomic molecules. By considering the symmetries of \(s\) and \(p\) orbitals on two atoms, we can form bonding and antibonding combinations labeled as having either \(\sigma\) or \(\pi\) symmetry depending on whether they resemble \(s\) or \(p\) orbitals when viewed along the bond axis (see diagram below). In all of the cases shown, only atomic orbitals that have the same symmetry when viewed along the bond axis \(z\) can form a chemical bond e.g. two \(s\) orbitals, two \(p_z\) orbitals, or an \(s\) and a \(p_z\) can form a bond, but a \(p_z\) and a \(p_x\), or an \(s\) and a \(p_x\) or a \(p_y\), cannot. It turns out that the rule that determines whether or not two atomic orbitals can bond is that they must belong to the same symmetry species within the point group of the molecule. We can prove this mathematically for two atomic orbitals \(\phi_1\) and \(\phi_2\) by looking at the overlap integral between the two orbitals.\[S_{12} = \langle \phi_1|\phi_2 \rangle = \int \phi_1^* \phi_2 d\tau \tag{18.1}\]In order for bonding to be possible, this integral must be non-zero. The product of the two functions \(\phi_1\) and \(\phi_2\) transforms as the direct product of their symmetry species i.e. \(\Gamma_{12}\) = \(\Gamma_1 \otimes \Gamma_2\). As explained above, for the overlap integral to be non-zero, \(\Gamma_{12}\) must contain the totally symmetric irreducible representation (\(A_{1g}\) for a homonuclear diatomic, which belongs to the point group \(D_{\infty h}\)). As it happens, this is only possible if \(\phi_1\) and \(\phi_2\) belong to the same irreducible representation.
These ideas are summarized for a diatomic in the table below.\[\begin{array}{lllll} \hline \text{First Atomic Orbital} & \text{Second Atomic Orbital} & \Gamma_1 \otimes \Gamma_2 & \text{Overlap Integral} & \text{Bonding?} \\ \hline s \: (A_{1g}) & s \: (A_{1g}) & A_{1g} & \text{Non-zero} & \text{Yes} \\ s \: (A_{1g}) & p_x \: (E_{1u}) & E_{1u} & \text{Zero} & \text{No} \\ s \: (A_{1g}) & p_z \: (A_{1u}) & A_{1u} & \text{Zero} & \text{No} \\ p_x \: (E_{1u}) & p_x \: (E_{1u}) & A_{1g} + A_{2g} + E_{2g} & \text{Non-zero} & \text{Yes} \\ p_x \: (E_{1u}) & p_z \: (A_{1u}) & E_{1g} & \text{Zero} & \text{No} \\ p_z \: (A_{1u}) & p_z \: (A_{1u}) & A_{1g} & \text{Non-zero} & \text{Yes} \end{array} \tag{18.2}\]
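The direct-product test can be automated for any finite point group. Since \(D_{\infty h}\) has infinitely many operations, the sketch below applies the same logic, for illustration, to the finite group \(C_{3v}\) used in earlier sections: form the product characters, reduce them, and check whether the totally symmetric irreducible representation appears.

```python
from fractions import Fraction

# C3v data: class sizes and characters for the classes (E, 2C3, 3sigma_v)
n_C = [1, 2, 3]
h = 6
irreps = {"A1": [1, 1, 1], "A2": [1, 1, -1], "E": [2, -1, 0]}

def direct_product(chi1, chi2):
    """Characters of a direct product are products of the characters."""
    return [a * b for a, b in zip(chi1, chi2)]

def contains_totally_symmetric(chi):
    """a(A1) from the reduction formula; a nonzero overlap requires a(A1) > 0."""
    a = Fraction(sum(n * x * xm for n, x, xm in zip(n_C, chi, irreps["A1"])), h)
    return a > 0

# Two E-symmetry functions: E x E has characters (4, 1, 0), which reduce
# to A1 + A2 + E, so the overlap integral may be nonzero.
ok = contains_totally_symmetric(direct_product(irreps["E"], irreps["E"]))

# An A1 function with an A2 function: the product is A2, with no A1
# component, so the overlap integral is necessarily zero.
not_ok = contains_totally_symmetric(direct_product(irreps["A1"], irreps["A2"]))
```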
1.19: Bonding in Polyatomics- Constructing Molecular Orbitals from SALCs
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.19%3A_Bonding_in_Polyatomics-_Constructing_Molecular_Orbitals_from_SALCs
In the previous section we showed how to use symmetry to determine whether two atomic orbitals can form a chemical bond. How do we carry out the same procedure for a polyatomic molecule, in which many atomic orbitals may combine to form a bond? Any SALCs of the same symmetry could potentially form a bond, so all we need to do to construct a molecular orbital is take a linear combination of all the SALCs of the same symmetry species, each multiplied by a coefficient yet to be determined. For the two \(A_1\) SALCs of \(NH_3\) from the previous sections, for example:\[\begin{array}{rcl} \Psi(A_1) & = & c_1 \phi_1 + c_2 \phi_2 \\ & = & c_1 s_N + c_2 \dfrac{1}{\sqrt{3}}(s_1 + s_2 + s_3) \end{array} \tag{19.1}\]Unfortunately, this is as far as group theory can take us. It can give us the functional form of the molecular orbitals but it cannot determine the coefficients \(c_1\) and \(c_2\). To go further and obtain the expansion coefficients and orbital energies, we must turn to quantum mechanics. The material we are about to cover will be repeated in greater detail in later courses on quantum mechanics and valence, but it is included here to provide you with a complete reference on how to construct molecular orbitals and determine their energies.
1.20: Calculating Orbital Energies and Expansion Coefficients
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.20%3A_Calculating_Orbital_Energies_and_Expansion_Coefficients
Calculation of the orbital energies and expansion coefficients is based on the variation principle, which states that any approximate wavefunction must have a higher energy than the true wavefunction. This follows directly from the fairly common-sense idea that in general any system tries to minimize its energy. If an ‘approximate’ wavefunction had a lower energy than the ‘true’ wavefunction, we would expect the system to try and adopt this ‘approximate’ lower energy state, rather than the ‘true’ state. That all approximations to the true wavefunction must have a higher energy than the true wavefunction is the only scenario that makes physical sense. A mathematical proof of the variation principle is given in the Appendix. We apply the variation principle as follows. Molecular energy levels, or orbital energies, are eigenvalues of the molecular Hamiltonian \(\hat{H}\). Using a standard result from quantum mechanics, it follows that the energy \(E\) of a molecular orbital \(\Psi\) is\[\begin{array}{lll} & E = \dfrac{\langle \Psi|\hat{H}|\Psi\rangle}{\langle \Psi|\Psi\rangle} & \text{(unnormalized} \: \Psi) \\ \text{or} & E = \langle \Psi|\hat{H}|\Psi\rangle & \text{(normalized} \: \Psi \text{, for which} \langle \Psi|\Psi\rangle = 1) \end{array} \label{20.1}\]Since the true wavefunction has the lowest energy, to find the closest possible approximation to it, all we have to do is find the coefficients in our expansion of SALCs that minimize the energy in the above expressions. In practice, we substitute our wavefunction and minimize the resulting expression with respect to the coefficients. To show how this is done, we’ll use our \(NH_3\) wavefunction of \(A_1\) symmetry from the previous section.
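As a quick numerical illustration of the variation principle, the sketch below (plain Python; the \(2 \times 2\) Hamiltonian is made up for illustration and is not taken from any real system, and the basis is assumed orthonormal so the overlap matrix is the identity) evaluates the energy expectation value for many trial wavefunctions and confirms that none falls below the exact lowest eigenvalue.

```python
import math

# Hypothetical 2x2 Hamiltonian matrix in an orthonormal basis (S = identity);
# the numbers are illustrative only.
H11, H12, H22 = -2.0, -1.0, -1.0

# Exact eigenvalues of the symmetric 2x2 matrix via the quadratic formula.
tr, det = H11 + H22, H11 * H22 - H12**2
E_ground = (tr - math.sqrt(tr**2 - 4 * det)) / 2   # lowest eigenvalue

def rayleigh(c1, c2):
    """Energy expectation value <Psi|H|Psi>/<Psi|Psi> for trial coefficients."""
    num = c1**2 * H11 + 2 * c1 * c2 * H12 + c2**2 * H22
    den = c1**2 + c2**2
    return num / den

# Scan many trial wavefunctions: none dips below the true ground-state energy,
# and the best trial approaches it from above.
trial_energies = [rayleigh(math.cos(t), math.sin(t))
                  for t in [i * 0.01 for i in range(1, 315)]]
assert all(E >= E_ground - 1e-12 for E in trial_energies)
best = min(trial_energies)
```

With the principle illustrated, we return to the \(NH_3\) wavefunction.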
Substituting into Equation \(\ref{20.1}\) gives:\[\begin{array}{rcl} E & = & \dfrac{\langle c_1\phi_1 + c_2\phi_2|\hat{H}|c_1\phi_1 + c_2\phi_2\rangle}{\langle c_1\phi_1 + c_2\phi_2| c_1\phi_1 + c_2\phi_2\rangle} \\ & = & \dfrac{\langle c_1\phi_1|\hat{H}|c_1\phi_1\rangle + \langle c_1\phi_1|\hat{H}|c_2\phi_2\rangle + \langle c_2\phi_2|\hat{H}|c_1\phi_1\rangle + \langle c_2\phi_2|\hat{H}|c_2\phi_2\rangle}{\langle c_1\phi_1|c_1\phi_1\rangle + \langle c_1\phi_1|c_2\phi_2\rangle + \langle c_2\phi_2|c_1\phi_1\rangle + \langle c_2\phi_2|c_2\phi_2\rangle} \\ & = & \dfrac{c_1^2\langle \phi_1|\hat{H}|\phi_1\rangle + c_1c_2\langle \phi_1|\hat{H}|\phi_2\rangle + c_2c_1\langle \phi_2|\hat{H}|\phi_1\rangle + c_2^2\langle \phi_2|\hat{H}|\phi_2\rangle}{c_1^2\langle \phi_1|\phi_1\rangle + c_1c_2\langle \phi_1|\phi_2\rangle + c_2c_1\langle \phi_2|\phi_1\rangle + c_2^2\langle \phi_2|\phi_2\rangle} \end{array} \label{20.2}\]If we now define a Hamiltonian matrix element \(H_{ij}\) = \(\langle \phi_i|\hat{H}|\phi_j\rangle\) and an overlap integral \(S_{ij}\) = \(\langle \phi_i|\phi_j\rangle\) and note that \(H_{ij}\) = \(H_{ji}\) and \(S_{ij}\) = \(S_{ji}\), this simplifies to\[E = \dfrac{c_1^2 H_{11} + 2c_1c_2 H_{12} + c_2^2 H_{22}}{c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}} \label{20.3}\]To get this into a simpler form for carrying out the energy minimization, we multiply both sides through by the denominator to give\[E(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) = c_1^2 H_{11} + 2c_1c_2 H_{12} + c_2^2 H_{22} \label{20.4}\]Now we need to minimize the energy with respect to \(c_1\) and \(c_2\), i.e., we require \[\dfrac{\partial E}{\partial c_1} = 0 \label{20.5a}\] and \[\dfrac{\partial E}{\partial c_2} = 0 \label{20.5b}\]If we differentiate the above equation through separately by \(c_1\) and \(c_2\) and apply this condition, we will end up with two equations in the two unknowns \(c_1\) and \(c_2\), which we can solve to determine the coefficients and the energy. Differentiating
Equation \(\ref{20.4}\) with respect to \(c_1\) (via the product rule of differentiation) gives\[\dfrac{\partial E}{\partial c_1}(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) + E(2c_1 S_{11} + 2c_2 S_{12}) = 2c_1 H_{11} + 2c_2 H_{12} \label{20.6}\]Differentiating Equation \(\ref{20.4}\) with respect to \(c_2\) gives\[\dfrac{\partial E}{\partial c_2}(c_1^2 S_{11} + 2c_1c_2 S_{12} + c_2^2 S_{22}) + E(2c_1 S_{12} + 2c_2 S_{22}) = 2c_1 H_{12} + 2c_2 H_{22} \label{20.7}\]Because\[\dfrac{\partial E}{\partial c_1} = \dfrac{\partial E}{\partial c_2} = 0 \label{20.8}\]the first term on the left hand side of both equations is zero, leaving us with\[\begin{array}{rcl} E(2c_1 S_{11} + 2c_2 S_{12}) & = & 2c_1 H_{11} + 2c_2 H_{12} \\ E(2c_1 S_{12} + 2c_2 S_{22}) & = & 2c_1 H_{12} + 2c_2 H_{22} \end{array} \label{20.9}\]These are normally rewritten slightly, in the form\[\begin{array}{rcl} c_1(H_{11} - ES_{11}) + c_2(H_{12} -ES_{12}) & = & 0 \\ c_1(H_{12} - ES_{12}) + c_2(H_{22} - ES_{22}) & = & 0 \end{array} \label{20.10}\]Equations \(\ref{20.10}\) are known as the secular equations and are the set of equations we need to solve to determine \(c_1\), \(c_2\), and \(E\). In the general case (derived in the Appendix), when our wavefunction is a linear combination of \(N\) SALCs (i.e. \(\Psi = \Sigma_{i=1}^N c_i\phi_i\)) we get \(N\) equations in \(N\) unknowns, with the \(k^{th}\) equation given by\[\sum_{i=1}^N c_i(H_{ki} - ES_{ki}) = 0 \label{20.11}\]Note that we can use any basis functions we like together with the linear variation method described here to construct approximate molecular orbitals and determine their energies, but choosing to use SALCs simplifies things considerably when the number of basis functions is large. An arbitrary set of \(N\) basis functions leads to a set of \(N\) equations in \(N\) unknowns, which must be solved simultaneously. 
Converting the basis into a set of SALCs separates the equations into several smaller sets of secular equations, one for each irreducible representation, which can be solved independently. It is usually easier to solve several sets of secular equations of lower dimensionality than one set of higher dimensionality.
1.21: Solving the Secular Equations
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.21%3A_Solving_the_Secular_Equations
As we have seen already, any set of linear equations may be rewritten as a matrix equation \(A\textbf{x}\) = \(\textbf{b}\). Linear equations are classified as simultaneous linear equations or homogeneous linear equations, depending on whether the vector \(\textbf{b}\) on the RHS of the equation is non-zero or zero.For a set of simultaneous linear equations (non-zero \(\textbf{b}\)) it is fairly apparent that if a unique solution exists, it can be found by multiplying both sides by the inverse matrix \(A^{-1}\) (since \(A^{-1}A\) on the left hand side is equal to the identity matrix, which has no effect on the vector \(\textbf{x}\))\[\begin{array}{rcl} A\textbf{x} & = & \textbf{b} \\ A^{-1}A\textbf{x} & = & A^{-1}\textbf{b} \\ \textbf{x} & = & A^{-1}\textbf{b} \end{array} \label{21.1}\]In practice, there are easier matrix methods for solving simultaneous equations than finding the inverse matrix, but these need not concern us here. In Section 8.4, we discovered that in order for a matrix to have an inverse, it must have a non-zero determinant. Since \(A^{-1}\) must exist in order for a set of simultaneous linear equations to have a solution, this means that the determinant of the matrix \(A\) must be non-zero for the equations to be solvable. The reverse is true for homogeneous linear equations. In this case the set of equations only has a solution if the determinant of \(A\) is equal to zero. The secular equations we want to solve are homogeneous equations, and we will use this property of the determinant to determine the molecular orbital energies. 
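The determinant conditions described above can be checked with a short sketch (plain Python; all matrices are made-up illustrations): a \(2 \times 2\) system \(A\textbf{x} = \textbf{b}\) is solved via the inverse when \(\det A \neq 0\), while a homogeneous system has non-trivial solutions, and scaled copies of them, only when its determinant vanishes.

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Simultaneous equations Ax = b (non-zero b): solvable via the inverse
# matrix whenever det(A) != 0.  Illustrative numbers only.
A = [[2.0, 1.0], [1.0, 2.0]]
b = [5.0, 4.0]
dA = det2(A[0][0], A[0][1], A[1][0], A[1][1])
assert dA != 0
# Inverse of a 2x2 matrix is (1/det) * [[d, -b], [-c, a]]; apply it to b.
x = [( A[1][1] * b[0] - A[0][1] * b[1]) / dA,
     (-A[1][0] * b[0] + A[0][0] * b[1]) / dA]

# Homogeneous equations Bx = 0: a non-trivial solution exists only if
# det(B) = 0, and any multiple of a solution is also a solution.
B = [[2.0, 1.0], [4.0, 2.0]]          # second row is twice the first
dB = det2(B[0][0], B[0][1], B[1][0], B[1][1])
x0 = [1.0, -2.0]                       # satisfies both rows of Bx = 0
scaled = [3.0 * x0[0], 3.0 * x0[1]]    # so does any scalar multiple
residual = B[0][0] * scaled[0] + B[0][1] * scaled[1]
```

The scaling property in the last lines is exactly why the molecular orbitals obtained from the secular equations can be normalized freely.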
An important property of homogeneous equations is that if a vector \(\textbf{x}\) is a solution, so is any multiple of \(\textbf{x}\), meaning that the solutions (the molecular orbitals) can be normalized without causing any problems.Recall the secular equations for the \(A_1\) orbitals of \(NH_3\) derived in the previous section\[\begin{array}{rcl} c_1(H_{11} - ES_{11}) + c_2(H_{12} - ES_{12}) & = & 0 \\ c_1(H_{12} - ES_{12}) + c_2(H_{22} - ES_{22}) & = & 0 \end{array} \label{21.2}\]where \(c_1\) and \(c_2\) are the coefficients in the linear combination of the SALCs \(\phi_1\) = \(s_N\) and \(\phi_2\) = \(\dfrac{1}{\sqrt{3}}(s_1 + s_2 + s_3)\) used to construct the molecular orbital. Writing this set of homogeneous linear equations in matrix form gives \[\begin{pmatrix} H_{11} - ES_{11} & H_{12} - ES_{12} \\ H_{12} - ES_{12} & H_{22} - ES_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \label{21.3}\]For the equations to have a solution, the determinant of the matrix must be equal to zero. Writing out the determinant will give us a polynomial equation in \(E\) that we can solve to obtain the orbital energies in terms of the Hamiltonian matrix elements \(H_{ij}\) and overlap integrals \(S_{ij}\). The number of energies obtained by ‘solving the secular determinant’ in this way is equal to the order of the matrix, in this case two. 
The secular determinant for Equation \(\ref{21.3}\) is (noting that \(S_{11}\) = \(S_{22} = 1\) since the SALCs are normalized)\[(H_{11} - E)(H_{22} - E) - (H_{12} - ES_{12})^2 = 0 \label{21.4}\]Expanding and collecting terms in \(E\) gives\[E^2(1-S_{12}^2) + E(2H_{12}S_{12} - H_{11} - H_{22}) + (H_{11}H_{22} - H_{12}^2) = 0 \label{21.5}\]which can be solved using the quadratic formula to give the energies of the two molecular orbitals.\[E_\pm = \dfrac{-(2H_{12}S_{12} - H_{11} - H_{22}) \pm \sqrt{(2H_{12}S_{12} - H_{11} - H_{22})^2 - 4(1-S_{12}^2)(H_{11}H_{22} - H_{12}^2)}}{2(1-S_{12}^2)} \label{21.6}\]To obtain numerical values for the energies, we need to evaluate the integrals \(H_{11}\), \(H_{22}\), \(H_{12}\), and \(S_{12}\). This would be quite a challenge to do analytically, but luckily there are a number of computer programs that can be used to calculate the integrals. One such program gives the following values (note that the overlap integral \(S_{12}\) is dimensionless).\[\begin{array}{rcl} H_{11} & = & -26.0000 \: eV \\ H_{22} & = & -22.2216 \: eV \\ H_{12} & = & -29.7670 \: eV \\ S_{12} & = & \: 0.8167 \end{array} \label{21.7}\]When we substitute these into our equation for the energy levels, we get:\[\begin{array}{rcl} E_+ & = & \: 29.8336 \: eV \\ E_- & = & -31.0063 \: eV \end{array} \label{21.8}\]We now have the orbital energies and the next step is to find the orbital coefficients. The coefficients for an orbital of energy \(E\) are found by substituting the energy into the secular equations and solving for the coefficients \(c_i\). Since the two secular equations are not linearly independent (i.e. they are effectively only one equation), when we solve them to find the coefficients what we will end up with is the relative values of the coefficients. This is true in general: in a system with \(N\) coefficients, solving the secular equations will allow all \(N\) of the coefficients \(c_i\) to be obtained in terms of, say, \(c_1\).
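The quadratic formula of Equation 21.6 is easy to mechanize. The sketch below (plain Python) solves Equation 21.5 using the integral values quoted above and, rather than asserting particular energies, verifies that each root makes the secular determinant of Equation 21.4 vanish; because the quoted integrals are rounded to four decimal places, the roots themselves are quite sensitive to that rounding.

```python
import math

# Integral values quoted above (the overlap integral S12 is dimensionless)
H11, H22, H12, S12 = -26.0000, -22.2216, -29.7670, 0.8167

# Coefficients of the quadratic in E (Equation 21.5)
a = 1 - S12**2
b = 2 * H12 * S12 - H11 - H22
c = H11 * H22 - H12**2

disc = math.sqrt(b**2 - 4 * a * c)
E_plus  = (-b + disc) / (2 * a)
E_minus = (-b - disc) / (2 * a)

def secular_det(E):
    """Secular determinant of Equation 21.4 evaluated at energy E."""
    return (H11 - E) * (H22 - E) - (H12 - E * S12)**2

# Both roots make the secular determinant vanish (to rounding error)
assert abs(secular_det(E_plus)) < 1e-8
assert abs(secular_det(E_minus)) < 1e-8
```

The number of roots equals the order of the secular determinant, here two, exactly as stated above.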
The absolute values of the coefficients are found by normalizing the wavefunction. Since the secular equations for the orbitals of energy \(E_+\) and \(E_-\) are not linearly independent, we can choose to solve either one of them to find the orbital coefficients. We will choose the first.\[(H_{11} - E_{\pm})c_1 + (H_{12} - E_{\pm}S_{12})c_2 = 0 \label{21.9}\]For the orbital with energy \(E_-\) = -31.0063 eV, substituting numerical values into this equation gives\[\begin{array}{rcl} 5.0063 c_1 - 4.4442 c_2 & = & 0 \\ c_2 & = & 1.1265 c_1 \end{array} \label{21.10}\]The molecular orbital is therefore\[\Psi = c_1(\phi_1 + 1.1265\phi_2) \label{21.11}\]Normalizing to find the constant \(c_1\) (by requiring \(\langle \Psi|\Psi \rangle\) = 1) gives\[\begin{array}{rcll} \Psi_1 & = & 0.4933\phi_1 + 0.5557\phi_2 & \\ & = & 0.4933s_N + 0.3208(s_1 + s_2 + s_3) & (\text{substituting the SALCs for} \: \phi_1 \: \text{and} \: \phi_2) \end{array} \label{21.12}\]For the second orbital, with energy \(E_+\) = 29.8336 eV, the secular equation is\[\begin{array}{rcl} -55.8336c_1 - 54.1321c_2 & = & 0 \\ c_2 & = & -1.0314c_1 \end{array} \label{21.13}\]giving \[\begin{array}{rcll} \Psi_2 & = & c_1(\phi_1 - 1.0314\phi_2) & \\ & = & 1.6242\phi_1 - 1.6752\phi_2 & \text{(after normalization)} \\ & = & 1.6242s_N -0.9672(s_1 + s_2 + s_3) \end{array} \label{21.14}\]These two \(A_1\) molecular orbitals \(\Psi_1\) and \(\Psi_2\), one bonding and one antibonding, are shown below. The remaining two SALCs arising from the hydrogen \(1s\) orbitals of \(NH_3\):\[\phi_3 = \dfrac{1}{\sqrt{6}}\begin{pmatrix} 2s_1 - s_2 - s_3 \end{pmatrix}\]and \[\phi_4 = \dfrac{1}{\sqrt{2}} \begin{pmatrix} s_2 - s_3 \end{pmatrix}\]form an orthogonal pair of molecular orbitals of \(E\) symmetry. We can show this by solving the secular determinant to find the orbital energies.
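The coefficient ratio and the normalization step can be reproduced in a few lines (plain Python, using the integrals and the bonding-orbital energy quoted above, and assuming normalized SALCs so that \(S_{11} = S_{22} = 1\)).

```python
import math

# Integrals and the bonding-orbital energy quoted in the text
H11, H12, S12 = -26.0000, -29.7670, 0.8167
E = -31.0063

# First secular equation: (H11 - E)c1 + (H12 - E*S12)c2 = 0,
# solved for the relative value of c2 (in units of c1)
ratio = -(H11 - E) / (H12 - E * S12)
assert round(ratio, 4) == 1.1265

# Normalize: <Psi|Psi> = c1^2 + c2^2 + 2*c1*c2*S12 = 1
c1 = 1 / math.sqrt(1 + ratio**2 + 2 * ratio * S12)
c2 = ratio * c1
assert round(c1, 4) == 0.4933 and round(c2, 4) == 0.5557
assert abs(c1**2 + c2**2 + 2 * c1 * c2 * S12 - 1) < 1e-12
```

Running the same two steps with \(E_+\) reproduces the antibonding ratio \(c_2 = -1.0314 c_1\).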
The secular equations in this case are:\[\begin{array}{rcl} c_1(H_{33} - ES_{33}) + c_2(H_{34} -ES_{34}) & = & 0 \\ c_1(H_{34} -ES_{34}) + c_2(H_{44} - ES_{44}) & = & 0 \end{array} \label{21.15}\]Solving the secular determinant gives\[E_\pm = \dfrac{-(2H_{34}S_{34} - H_{33} - H_{44}) \pm \sqrt{(2H_{34}S_{34} - H_{33} - H_{44})^2 - 4(1-S_{34}^2)(H_{33}H_{44} - H_{34}^2)}}{2(1-S_{34}^2)} \label{21.16}\]The integrals required are\[\begin{array}{rcl} H_{33} & = & -9.2892 \: eV \\ H_{44} & = & -9.2892 \: eV \\ H_{34} & = & 0 \\ S_{34} & = & 0 \end{array} \label{21.17}\]Using the fact that \(H_{34}\) = \(S_{34} = 0\), the expression for the energies reduces to\[E_\pm = \dfrac{(H_{33} + H_{44}) \pm (H_{33} - H_{44})}{2} \label{21.18}\]giving \(E_+\) = \(H_{33}\) = -9.2892 eV and \(E_-\) = \(H_{44}\) = -9.2892 eV. Each SALC therefore forms a molecular orbital by itself, and the two orbitals have the same energy; the two SALCs form an orthogonal pair of degenerate orbitals. These two molecular orbitals of \(E\) symmetry are shown below.
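A short sketch (plain Python, using the \(E\)-symmetry integrals quoted above) confirms the degenerate case: with \(H_{34} = S_{34} = 0\), the general quadratic of Equation 21.16 collapses to a double root at \(H_{33} = H_{44}\).

```python
import math

# Integrals quoted above for the two E-symmetry SALCs
H33 = H44 = -9.2892
H34 = S34 = 0.0

# General quadratic coefficients (Equation 21.16)
a = 1 - S34**2
b = 2 * H34 * S34 - H33 - H44
c = H33 * H44 - H34**2
disc = math.sqrt(max(b**2 - 4 * a * c, 0.0))   # guard tiny negative rounding
E_plus  = (-b + disc) / (2 * a)
E_minus = (-b - disc) / (2 * a)

# Both roots equal H33 = H44: each SALC is an orbital on its own, and the
# two orbitals are degenerate
assert abs(E_plus - H33) < 1e-9 and abs(E_minus - H44) < 1e-9
```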
1.23: A more complicated bonding example
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.23%3A_A_more_complicated_bonding_example
As another example, we will use group theory to construct the molecular orbitals of \(H_2O\) (point group \(C_{2v}\)) using a basis set consisting of all the valence orbitals. The valence orbitals are a \(1s\) orbital on each hydrogen, which we will label \(s_H\) and \(s_H'\), and a \(2s\) and three \(2p\) orbitals on the oxygen, which we will label \(s_O\), \(p_x\), \(p_y\), \(p_z\) giving a complete basis \(\begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix}\). The first thing to do is to determine how each orbital transforms under the symmetry operations of the \(C_{2v}\) point group (\(E\), \(C_2\), \(\sigma_v\) and \(\sigma_v'\)), construct a matrix representation and determine the characters of each operation. The symmetry operations and axis system we will be using are shown below.The orbitals transform in the following way\[\begin{array}{lrcl} E & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} \\ C_2 & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H', s_H, s_O, -p_x, -p_y, p_z \end{pmatrix} \\ \sigma_v(xz) & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H, s_H', s_O, p_x, -p_y, p_z \end{pmatrix} \\ \sigma_v'(yz) & \begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix} & \rightarrow & \begin{pmatrix} s_H', s_H, s_O, -p_x, p_y, p_z \end{pmatrix} \end{array} \label{23.1}\]After a little practice, you will probably be able to write matrix representatives straight away just by looking at the effect of the symmetry operations on the basis. However, if you are struggling a little the following procedure might help.Remember that the matrix representatives are just the matrices we would have to multiply the left hand side of the above equations by to give the right hand side. In most cases they are very easy to work out. 
Probably the most straightforward way to think about it is that each column of the matrix shows where one of the original basis functions ends up. For example, the first column transforms the basis function \(s_H\) to its new position. The first column of the matrix can be found by taking the result on the right hand side of the above expressions, replacing every function that isn’t \(s_H\) with a zero, putting the coefficient of \(s_H\) (\(1\) or \(-1\) in this example) in the position at which it occurs, and taking the transpose to give a column vector.Consider the representative for the \(C_2\) operation. The original basis \(\begin{pmatrix} s_H, s_H', s_O, p_x, p_y, p_z \end{pmatrix}\) transforms into \(\begin{pmatrix} s_H', s_H, s_O, -p_x, -p_y, p_z \end{pmatrix}\). The first column of the matrix therefore transforms \(s_H\) into \(s_H'\). Taking the result and replacing all the other functions with zeroes gives \(\begin{pmatrix} 0, s_H, 0, 0, 0, 0 \end{pmatrix}\). The coefficient of \(s_H\) is \(1\), so the first column of the \(C_2\) matrix representative is\[\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \label{23.2}\]The matrix representatives and their characters are\[\begin{array}{cccc} E & C_2 & \sigma_v & \sigma_v' \\ \scriptsize{\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 &1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}} & \scriptstyle{\begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 
0 & 0 & 0 & 0 & 1 \end{pmatrix}} \\ \chi(E) = 6 & \chi(C_2) = 0 & \chi(\sigma_v) = 4 & \chi(\sigma_v') = 2 \end{array} \label{23.3}\]Now we are ready to work out which irreducible representations are spanned by the basis we have chosen. The character table for \(C_{2v}\) is:\[\begin{array}{l|cccc|l} C_{2v} & E & C_2 & \sigma_v & \sigma_v' & h = 4 \\ \hline A_1 & 1 & 1 & 1 & 1 & z, x^2, y^2, z^2 \\ A_2 & 1 & 1 & -1 & -1 & xy, R_z \\ B_1 & 1 & -1 & 1 & -1 & x, xz, R_y \\ B_2 & 1 & -1 & -1 & 1 & y, yz, R_x \\ \hline \end{array}\]As before, we use Equation (15.20) to find out the number of times each irreducible representation appears.\[a_k = \dfrac{1}{h}\sum_C n_C \chi(g) \chi_k(g) \label{23.4}\]We have\[\begin{array}{rcll} a(A_1) & = & \dfrac{1}{4}(1 \times 6 \times 1 + 1 \times 0 \times 1 + 1\times 4\times 1 + 1\times 2\times 1) & = 3 \\ a(A_2) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times 1 + 1\times 4\times -1 + 1\times 2\times -1) & = 0 \\ a(B_1) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times -1 + 1\times 4\times 1 + 1\times 2\times -1) & = 2 \\ a(B_2) & = & \dfrac{1}{4}(1\times 6\times 1 + 1\times 0\times -1 + 1\times 4\times -1 + 1\times 2\times 1) & = 1 \end{array} \label{23.5}\]so the basis spans \(3A_1 + 2B_1 + B_2\). 
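The whole bookkeeping above — transformation table, characters, reduction — can be mechanized in a few lines (plain Python; the operation names `sv` and `sv'` are our shorthand for \(\sigma_v(xz)\) and \(\sigma_v'(yz)\)).

```python
basis = ["sH", "sH'", "sO", "px", "py", "pz"]

# Where each basis function goes under the C2v operations (Equation 23.1):
# function -> (image function, sign)
ops = {
    "E":   {f: (f, 1) for f in basis},
    "C2":  {"sH": ("sH'", 1), "sH'": ("sH", 1), "sO": ("sO", 1),
            "px": ("px", -1), "py": ("py", -1), "pz": ("pz", 1)},
    "sv":  {"sH": ("sH", 1), "sH'": ("sH'", 1), "sO": ("sO", 1),
            "px": ("px", 1), "py": ("py", -1), "pz": ("pz", 1)},
    "sv'": {"sH": ("sH'", 1), "sH'": ("sH", 1), "sO": ("sO", 1),
            "px": ("px", -1), "py": ("py", 1), "pz": ("pz", 1)},
}

# Character = trace of the representative: only functions that map onto
# themselves contribute, and they contribute their sign.
chars = {g: sum(sign for f, (image, sign) in mapping.items() if image == f)
         for g, mapping in ops.items()}
assert chars == {"E": 6, "C2": 0, "sv": 4, "sv'": 2}

# Rows of the C2v character table, then the reduction formula
# a_k = (1/h) * sum_g chi(g) chi_k(g), with h = 4
irreps = {"A1": {"E": 1, "C2": 1, "sv": 1, "sv'": 1},
          "A2": {"E": 1, "C2": 1, "sv": -1, "sv'": -1},
          "B1": {"E": 1, "C2": -1, "sv": 1, "sv'": -1},
          "B2": {"E": 1, "C2": -1, "sv": -1, "sv'": 1}}
counts = {k: sum(chars[g] * row[g] for g in chars) // 4
          for k, row in irreps.items()}
assert counts == {"A1": 3, "A2": 0, "B1": 2, "B2": 1}
```

The final dictionary states, in code, that the basis spans \(3A_1 + 2B_1 + B_2\).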
Now we use the projection operators applied to each basis function \(f_i\) in turn to determine the SALCs \(\phi_i = \Sigma_g \chi_k(g) g f_i\). The SALCs of \(A_1\) symmetry are:\[\begin{array}{rclll} \phi(s_H) & = & s_H + s_H' + s_H + s_H' & = & 2(s_H + s_H') \\ \phi(s_H') & = & s_H' + s_H + s_H' + s_H & = & 2(s_H + s_H') \\ \phi(s_O) & = & s_O + s_O + s_O + s_O & = & 4s_O \\ \phi(p_x) & = & p_x - p_x + p_x - p_x & = & 0 \\ \phi(p_y) & = & p_y - p_y + p_y - p_y & = & 0 \\ \phi(p_z) & = & p_z + p_z + p_z + p_z & = & 4p_z \end{array} \label{23.6}\]The SALCs of \(B_1\) symmetry are:\[\begin{array}{rclll} \phi(s_H) & = & s_H - s_H' + s_H - s_H' & = & 2(s_H - s_H') \\ \phi(s_H') & = & s_H' - s_H + s_H' - s_H & = & 2(s_H' - s_H) \\ \phi(s_O) & = & s_O - s_O + s_O - s_O & = & 0 \\ \phi(p_x) & = & p_x + p_x + p_x + p_x & = & 4p_x \\ \phi(p_y) & = & p_y + p_y - p_y - p_y & = & 0 \\ \phi(p_z) & = & p_z - p_z + p_z - p_z & = & 0 \end{array} \label{23.7}\]The SALCs of \(B_2\) symmetry are:\[\begin{array}{rclll} \phi(s_H) & = & s_H - s_H' - s_H + s_H' & = & 0 \\ \phi(s_H') & = & s_H' - s_H - s_H' + s_H & = & 0 \\ \phi(s_O) & = & s_O - s_O - s_O + s_O & = & 0 \\ \phi(p_x) & = & p_x + p_x - p_x - p_x & = & 0 \\ \phi(p_y) & = & p_y + p_y + p_y + p_y & = & 4p_y \\ \phi(p_z) & = & p_z - p_z - p_z + p_z & = & 0 \end{array} \label{23.8}\]After normalization, our SALCs are therefore: A1 symmetry\[\begin{array}{rcl} \phi_1 & = & \dfrac{1}{\sqrt{2}}(s_H + s_H') \\ \phi_2 & = & s_O \\ \phi_3 & = & p_z \end{array} \label{23.9}\]B1 symmetry \[\begin{array}{rcl} \phi_4 & = & \dfrac{1}{\sqrt{2}}(s_H - s_H') \\ \phi_5 & = & p_x \end{array} \label{23.10}\]B2 symmetry\[\begin{array}{rcl} \phi_6 & = & p_y \end{array} \label{23.11}\]Note that we only take one of the first two SALCs generated by the \(B_1\) projection operator since one is a simple multiple of the other (i.e. they are not linearly independent).
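The projection step itself can also be automated. The sketch below (plain Python, reusing the transformation table of Equation 23.1 for just the functions needed here) generates un-normalized SALCs as dictionaries mapping basis functions to coefficients.

```python
# Action of the C2v operations on the basis functions we need here
# (read off from Equation 23.1): function -> (image, sign).
ops = {
    "E":   {"sH": ("sH", 1), "sH'": ("sH'", 1), "py": ("py", 1)},
    "C2":  {"sH": ("sH'", 1), "sH'": ("sH", 1), "py": ("py", -1)},
    "sv":  {"sH": ("sH", 1), "sH'": ("sH'", 1), "py": ("py", -1)},
    "sv'": {"sH": ("sH'", 1), "sH'": ("sH", 1), "py": ("py", 1)},
}
chi = {"A1": {"E": 1, "C2": 1, "sv": 1, "sv'": 1},
       "B1": {"E": 1, "C2": -1, "sv": 1, "sv'": -1},
       "B2": {"E": 1, "C2": -1, "sv": -1, "sv'": 1}}

def project(irrep, f):
    """Un-normalized SALC phi = sum_g chi_k(g) g f as {function: coefficient}."""
    out = {}
    for g, mapping in ops.items():
        image, sign = mapping[f]
        out[image] = out.get(image, 0) + chi[irrep][g] * sign
    return {k: v for k, v in out.items() if v != 0}

assert project("A1", "sH") == {"sH": 2, "sH'": 2}   # 2(sH + sH')
assert project("B1", "sH") == {"sH": 2, "sH'": -2}  # 2(sH - sH')
assert project("B2", "sH") == {}                    # vanishes
assert project("B2", "py") == {"py": 4}             # 4 py
```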
We can therefore construct three molecular orbitals of \(A_1\) symmetry, with the general form\[\begin{array}{rcll} \Psi(A_1) & = & c_1 \phi_1 + c_2 \phi_2 + c_3 \phi_3 & \\ & = & c_1'(s_H + s_H') + c_2 s_O + c_3 p_z & \text{where} \: c_1' = \dfrac{c_1}{\sqrt{2}} \end{array} \label{23.12}\]two molecular orbitals of \(B_1\) symmetry, of the form\[\begin{array}{rcl} \Psi(B_1) & = & c_4 \phi_4 + c_5 \phi_5 \\ & = & c_4'(s_H - s_H') + c_5 p_x \end{array} \label{23.13}\]and one molecular orbital of \(B_2\) symmetry\[\begin{array}{rcl} \Psi(B_2) & = & \phi_6 \\ & = & p_y \end{array} \label{23.14}\]To work out the coefficients \(c_1\) - \(c_5\) and determine the orbital energies, we would have to solve the secular equations for each set of orbitals in turn. We are not dealing with a conjugated \(p\) system, so in this case Hückel theory cannot be used and the various \(H_{ij}\) and \(S_{ij}\) integrals would have to be calculated numerically and substituted into the secular equations. This involves a lot of tedious algebra, which we will leave out for the moment. The LCAO orbitals determined above are an approximation of the true molecular orbitals of water, which are shown on the right. As we have shown using group theory, the \(A_1\) molecular orbitals involve the oxygen \(2s\) and \(2p_z\) atomic orbitals and the sum \(s_H + s_H'\) of the hydrogen \(1s\) orbitals. The \(B_1\) molecular orbitals involve the oxygen \(2p_x\) orbital and the difference \(s_H -s_H'\) of the two hydrogen \(1s\) orbitals, and the \(B_2\) molecular orbital is essentially an oxygen \(2p_y\) atomic orbital. Claire Vallance (University of Oxford)
1.24: Molecular Vibrations
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.24%3A_Molecular_Vibrations
Vibrational motion in diatomic molecules is often discussed within the context of the simple harmonic oscillator in quantum mechanics. A diatomic molecule has only a single bond that can vibrate; we say it has a single vibrational mode. As you may expect, the vibrational motions of polyatomic molecules are much more complicated than those in a diatomic. Firstly, there are more bonds that can vibrate; and secondly, in addition to stretching vibrations, the only type of vibration possible in a diatomic, we can also have bending and torsional vibrational modes. Since changing one bond length in a polyatomic will often affect the length of nearby bonds, we cannot consider the vibrational motion of each bond in isolation; instead we talk of normal modes involving the concerted motion of groups of bonds. As a simple example, the normal modes of a linear triatomic molecule are shown below.Once we know the symmetry of a molecule at its equilibrium structure, group theory allows us to predict the vibrational motions it will undergo using exactly the same tools we used above to investigate molecular orbitals. Each vibrational mode transforms as one of the irreducible representations of the molecule’s point group. Before moving on to an example, we will quickly review how to determine the number of vibrational modes in a molecule.An atom can undergo only translational motion, and therefore has three degrees of freedom corresponding to motion along the \(x\), \(y\), and \(z\) Cartesian axes. Translational motion in any arbitrary direction can always be expressed in terms of components along these three axes. When atoms combine to form molecules, each atom still has three degrees of freedom, so the molecule as a whole has \(3N\) degrees of freedom, where \(N\) is the number of atoms in the molecule. 
However, the fact that each atom in a molecule is bonded to one or more neighboring atoms severely hinders its translational motion, and also ties its motion to that of the atoms to which it is attached. For these reasons, while it is entirely possible to describe molecular motions in terms of the translational motions of individual atoms (we will come back to this in the next section), we are often more interested in the motions of the molecule as a whole. These may be divided into three types: translational, rotational, and vibrational. Just as for an individual atom, the molecule as a whole has three degrees of translational freedom, leaving \(3N - 3\) degrees of freedom in rotation and vibration. The number of rotational degrees of freedom depends on the structure of the molecule. In general, there are three possible rotational degrees of freedom, corresponding to rotation about the \(x\), \(y\), and \(z\) Cartesian axes. A non-linear polyatomic molecule does indeed have three rotational degrees of freedom, leaving \(3N - 6\) degrees of freedom in vibration (i.e. \(3N - 6\) vibrational modes). In a linear molecule, the situation is a little different. It is generally accepted that to be classified as a true rotation, a motion must change the position of one or more of the atoms. If we define the \(z\) axis as the molecular axis, we see that spinning the molecule about the axis does not move any of the atoms from their original position, so this motion is not truly a rotation. Consequently, a linear molecule has only two degrees of rotational freedom, corresponding to rotations about the \(x\) and \(y\) axes. This type of molecule has \(3N - 5\) degrees of freedom left for vibration, or \(3N - 5\) vibrational modes. In summary: a non-linear molecule has \(3N - 6\) vibrational modes, while a linear molecule has \(3N - 5\). We mentioned above that the procedure for determining the normal vibrational modes of a polyatomic molecule is very similar to that used in previous sections to construct molecular orbitals.
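The mode counting above can be summarized in a tiny helper (plain Python; the function name is ours, not a standard routine):

```python
def vibrational_modes(n_atoms, linear=False):
    """Number of vibrational modes: 3N minus translations and rotations."""
    if n_atoms == 1:
        return 0                      # a lone atom cannot vibrate
    rotations = 2 if linear else 3    # linear molecules lose one rotation
    return 3 * n_atoms - 3 - rotations

assert vibrational_modes(2, linear=True) == 1    # any diatomic
assert vibrational_modes(3, linear=False) == 3   # e.g. a bent triatomic (H2O)
assert vibrational_modes(3, linear=True) == 4    # e.g. a linear triatomic (CO2)
```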
In fact, virtually the only difference between these two applications of group theory is the choice of basis set.As we have already established, the motions of a molecule may be described in terms of the motions of each atom along the \(x\), \(y\) and \(z\) axis. Consequently, it probably won’t come as too much of a surprise to discover that a very useful basis for describing molecular motions comprises a set of \(\begin{pmatrix} x, y, z \end{pmatrix}\) axes centered on each atom. This basis is usually known as the \(\textit{3N}\) Cartesian basis (since there are \(3N\) Cartesian axes, \(3\) axes for each of the \(N\) atoms in the molecule). Note that each molecule will have a different \(3N\) Cartesian basis, just as every molecule has a different atomic orbital basis.Our first task in investigating motions of a particular molecule is to determine the characters of the matrix representatives for the \(3N\) Cartesian basis under each of the symmetry operations in the molecular point group. We will use the \(H_2O\) molecule, which has \(C_{2v}\) symmetry, as an example.\(H_2O\) has three atoms, so the \(3N\) Cartesian basis will have \(9\) elements. The basis vectors are shown in the diagram below.One way of determining the characters would be to construct all of the matrix representatives and take their traces. While you are more than welcome to try this approach if you want some practice at constructing matrix representatives, there is an easier way. 
Recall that we can also determine the character of a matrix representative under a particular symmetry operation by stepping through the basis functions and applying the following rules: a basis vector that is left unshifted and unchanged by the operation contributes \(+1\) to the character; a basis vector that stays on the same atom but is reversed in direction contributes \(-1\); and a basis vector that is moved onto a different atom contributes \(0\). For \(H_2O\), this gives us the following characters for the \(3N\) Cartesian basis (check that you can obtain this result using the rules above and the basis vectors as drawn in the figure):\[\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \\ \chi_{3N}: & 9 & -1 & 3 & 1 \end{array} \tag{24.1}\]There is an even quicker way to work out the characters of the \(3N\) Cartesian basis if you have a character table in front of you. The character for the Cartesian basis is simply the sum of the characters for the \(x\), \(y\), and \(z\) (or \(T_x\), \(T_y\), \(T_z\)) functions listed in the character table. To get the character for the \(\textit{3N}\) Cartesian basis, simply multiply this by the number of atoms in the molecule that are unshifted by the symmetry operation. The \(C_{2v}\) character table is shown below.\[\begin{array}{l|cccc|l} C_{2v} & E & C_2 & \sigma_v & \sigma_v' & h = 4 \\ \hline A_1 & 1 & 1 & 1 & 1 & z, x^2, y^2, z^2 \\ A_2 & 1 & 1 & -1 & -1 & xy, R_z \\ B_1 & 1 & -1 & 1 & -1 & x, xz, R_y \\ B_2 & 1 & -1 & -1 & 1 & y, yz, R_x \\ \hline \end{array} \tag{24.2}\]\(x\) transforms as \(B_1\), \(y\) as \(B_2\), and \(z\) as \(A_1\), so the characters for the Cartesian basis are\[\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \\ \chi_{3N}: & 3 & -1 & 1 & 1 \end{array} \tag{24.3}\]We multiply each of these by the number of unshifted atoms (\(3\) for the identity operation, \(1\) for \(C_2\), \(3\) for \(\sigma_v\) and \(1\) for \(\sigma_v'\)) to obtain the characters for the \(3N\) Cartesian basis.\[\begin{array}{lcccc} \chi_{3N}: & 9 & -1 & 3 & 1 \end{array} \tag{24.4}\]Reassuringly, we obtain the same characters as we did previously.
Which of the three methods you use to get to this point is up to you. We now have the characters for the molecular motions (described by the \(3N\) Cartesian basis) under each symmetry operation. At this point, we want to separate these characters into contributions from translation, rotation, and vibration. This turns out to be a very straightforward task. We can read the characters for the translational and rotational modes directly from the character table, and we obtain the characters for the vibrations simply by subtracting these from the \(3N\) Cartesian characters we’ve just determined. The characters for the translations are the same as the summed characters of \(x\), \(y\), and \(z\) that we found above. We find the characters for the rotations by adding together the characters for \(R_x\), \(R_y\), and \(R_z\) from the character table (or just \(R_x\) and \(R_y\) if the molecule is linear). For \(H_2O\), we have:\[\begin{array}{lcccc} \text{Operation:} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) \\ \chi_{3N}: & 9 & -1 & 3 & 1 \\ \chi_{\text{Trans}}: & 3 & -1 & 1 & 1 \\ \chi_{\text{Rot}}: & 3 & -1 & -1 & -1 \\ \chi_{\text{Vib}} = \chi_{3N} - \chi_{\text{Trans}} - \chi_{\text{Rot}}: & 3 & 1 & 3 & 1 \end{array} \tag{24.5}\]The characters in the final row are the sums of the characters for all of the molecular vibrations. We can find out the symmetries of the individual vibrations by using the reduction equation (Equation (15.20)) to determine the contribution from each irreducible representation. In many cases you won’t even need to use the equation, and can work out which irreducible representations are contributing just by inspection of the character table. In the present case, the only combination of irreducible representations that can give the required values for \(\chi_{\text{Vib}}\) is \(2A_1 + B_1\).
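The whole analysis — unshifted-atom shortcut, subtraction of translations and rotations, and reduction — fits in a short sketch (plain Python; `sv` and `sv'` abbreviate the two mirror planes):

```python
ops = ["E", "C2", "sv", "sv'"]
# C2v character table rows (sv = sigma_v(xz), sv' = sigma_v'(yz))
table = {"A1": [1, 1, 1, 1], "A2": [1, 1, -1, -1],
         "B1": [1, -1, 1, -1], "B2": [1, -1, -1, 1]}

# chi(x)+chi(y)+chi(z) per operation (x ~ B1, y ~ B2, z ~ A1),
# times the number of atoms left unshifted by each operation
chi_xyz = [table["B1"][i] + table["B2"][i] + table["A1"][i] for i in range(4)]
unshifted = [3, 1, 3, 1]                 # H2O atoms left in place
chi_3N = [c * n for c, n in zip(chi_xyz, unshifted)]
assert chi_3N == [9, -1, 3, 1]

# Subtract translations (x, y, z) and rotations (Rz ~ A2, Ry ~ B1, Rx ~ B2)
chi_trans = chi_xyz
chi_rot = [table["A2"][i] + table["B1"][i] + table["B2"][i] for i in range(4)]
chi_vib = [a - b - c for a, b, c in zip(chi_3N, chi_trans, chi_rot)]
assert chi_vib == [3, 1, 3, 1]

# Reduction formula: a_k = (1/h) sum_g chi(g) chi_k(g), with h = 4
modes = {k: sum(v * r for v, r in zip(chi_vib, row)) // 4
         for k, row in table.items()}
assert modes == {"A1": 2, "A2": 0, "B1": 1, "B2": 0}
```

The final dictionary confirms the \(2A_1 + B_1\) result obtained by inspection.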
As an exercise, you should make sure you are also able to obtain this result using the reduction equation. So far this may all seem a little abstract, and you probably want to know what the vibrations of \(H_2O\) actually look like. For a molecule with only three atoms, it is fairly easy to identify the possible vibrational modes and to assign them to the appropriate irreducible representation. For a larger molecule, the problem may become much more complex, and in that case we can generate the SALCs of the \(3N\) Cartesian basis, which will tell us the atomic displacements associated with each vibrational mode. We will do this now for \(H_2O\). As before, we generate the SALCs of each symmetry by applying the appropriate projection operator to each of the basis functions (or in this case, basis vectors) \(f_i\) in turn.\[\phi_i = \sum_g \chi_k(g) g f_i \tag{24.6}\]In this case we have \(9\) basis vectors, which we will label \(x_H\), \(y_H\), \(z_H\), \(x_O\), \(y_O\), \(z_O\), \(x_{H'}\), \(y_{H'}\), \(z_{H'}\), describing the displacements of the two \(H\) atoms and the \(O\) atom along Cartesian axes.
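For a small molecule this projection can also be carried out numerically. The sketch below is our own encoding of the \(C_{2v}\) operations, assuming the molecule lies in the \(xz\) plane with the \(C_2\) axis along \(z\); it builds the nine-dimensional representatives and projects \(x_H\) onto \(A_1\), which you can compare with the hand-worked results that follow:

```python
import numpy as np

# Basis order: xH, yH, zH, xO, yO, zO, xH', yH', zH'
def rep(swap_H, sx, sy, sz):
    """9x9 representative: per-atom sign flips (sx, sy, sz),
    optionally exchanging the two hydrogens."""
    M = np.zeros((9, 9))
    atom_map = [2, 1, 0] if swap_H else [0, 1, 2]   # H, O, H'
    for a in range(3):
        for k, s in enumerate((sx, sy, sz)):
            M[3 * atom_map[a] + k, 3 * a + k] = s
    return M

ops = {"E":   rep(False,  1,  1, 1),
       "C2":  rep(True,  -1, -1, 1),
       "sxz": rep(False,  1, -1, 1),
       "syz": rep(True,  -1,  1, 1)}

# Sanity check: the traces reproduce chi_3N = (9, -1, 3, 1)
print([int(np.trace(M)) for M in ops.values()])

# Project x_H (basis index 0) onto A1 (all characters +1):
f = np.zeros(9)
f[0] = 1.0
phi = sum(M @ f for M in ops.values())
print(phi)   # 2 xH - 2 xH'
```

Projecting the other eight basis vectors, and using the \(B_1\) characters \((1, -1, 1, -1)\) as weights, reproduces the remaining SALCs in the same way.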
For the SALCs of \(A_1\) symmetry, applying the projection operator to each basis vector in turn gives (check that you can obtain this result):\[\begin{array}{rclll} \phi_1(x_H) & = & x_H - x_{H'} + x_H - x_{H'} & = & 2x_H - 2x_{H'} \\ \phi_2(y_H) & = & y_H - y_{H'} - y_H + y_{H'} & = & 0 \\ \phi_3(z_H) & = & z_H + z_{H'} + z_H + z_{H'} & = & 2z_H + 2z_{H'} \\ \phi_4(x_O) & = & x_O - x_O + x_O - x_O & = & 0 \\ \phi_5(y_O) & = & y_O - y_O - y_O + y_O & = & 0 \\ \phi_6(z_O) & = & z_O + z_O + z_O + z_O & = & 4z_O \\ \phi_7(x_{H'}) & = & x_{H'} - x_H + x_{H'} - x_H & = & 2x_{H'} - 2x_H \\ \phi_8(y_{H'}) & = & y_{H'} - y_H - y_{H'} + y_H & = & 0 \\ \phi_9(z_{H'}) & = & z_{H'} + z_H + z_{H'} + z_H & = & 2z_{H'} + 2z_H \end{array} \tag{24.7}\]We see that the motion characteristic of an \(A_1\) vibration (which we have identified as the symmetric stretch and the bending vibration) may be summarized as follows: the two \(H\) atoms move in opposite directions along the \(x\) axis and in the same direction along the \(z\) axis, while the \(O\) atom moves only along the \(z\) axis. The asymmetric stretch has \(B_1\) symmetry, and applying the projection operator in this case gives:\[\begin{array}{rclll} \phi_1(x_H) & = & x_H + x_{H'} + x_H + x_{H'} & = & 2x_H + 2x_{H'} \\ \phi_2(y_H) & = & y_H + y_{H'} - y_H - y_{H'} & = & 0 \\ \phi_3(z_H) & = & z_H - z_{H'} + z_H - z_{H'} & = & 2z_H - 2z_{H'} \\ \phi_4(x_O) & = & x_O + x_O + x_O + x_O & = & 4x_O \\ \phi_5(y_O) & = & y_O + y_O - y_O - y_O & = & 0 \\ \phi_6(z_O) & = & z_O - z_O + z_O - z_O & = & 0 \\ \phi_7(x_{H'}) & = & x_{H'} + x_H + x_{H'} + x_H & = & 2x_{H'} + 2x_H \\ \phi_8(y_{H'}) & = & y_{H'} + y_H - y_{H'} - y_H & = & 0 \\ \phi_9(z_{H'}) & = & z_{H'} - z_H + z_{H'} - z_H & = & 2z_{H'} - 2z_H \end{array} \tag{24.8}\]In this vibrational mode, the two \(H\) atoms move in the same direction along the \(x\) axis and in opposite directions along the \(z\) axis. We have now shown how group theory may be used together with the \(3N\) Cartesian basis to identify the symmetries of the translational, rotational and vibrational modes of motion of a molecule, and also to determine the atomic
displacements associated with each vibrational mode. While it was fairly straightforward to investigate the atomic displacements associated with each vibrational mode of \(H_2O\) using the \(3N\) Cartesian basis, this procedure becomes more complicated for larger molecules. Also, we are often more interested in how bond lengths and angles change in a vibration, rather than in the Cartesian displacements of the individual atoms. If we are only interested in looking at molecular vibrations, we can use a different procedure from that described above, and start from a basis of internal coordinates. Internal coordinates are simply a set of bond lengths and bond angles, which we can use as a basis for generating representations and, eventually, SALCs. Since bond lengths and angles do not change during translational or rotational motion, no information will be obtained on these types of motion. For \(H_2O\), the three internal coordinates of interest are the two \(OH\) bond lengths, which we will label \(r\) and \(r'\), and the \(HOH\) bond angle, which we will label \(\theta\). If we wanted to, we could separate our basis into two different bases, one consisting only of bond lengths, to describe stretching vibrations, and one consisting of only bond angles, to describe bending vibrations. However, the current example is simple enough to treat all the basis functions together. As usual, our first step is to work out the characters of the matrix representatives for this basis under each symmetry operation.
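For this three-function basis the projection is easy to generate programmatically. In the sketch below (labels such as `rp` for \(r'\) are ours), each \(C_{2v}\) operation is encoded as a permutation of \((r, r', \theta)\); you can compare the printed SALCs with the hand-worked results that follow:

```python
from collections import Counter

# Each C2v operation permutes the internal coordinates (r, r', theta);
# C2 and sigma_v'(yz) exchange the two O-H bonds.
ops = {"E":   {"r": "r",  "rp": "rp", "th": "th"},
       "C2":  {"r": "rp", "rp": "r",  "th": "th"},
       "sxz": {"r": "r",  "rp": "rp", "th": "th"},
       "syz": {"r": "rp", "rp": "r",  "th": "th"}}

chars = {"A1": {"E": 1, "C2": 1,  "sxz": 1, "syz": 1},
         "B1": {"E": 1, "C2": -1, "sxz": 1, "syz": -1}}

def project(irrep, f):
    """Apply phi = sum_g chi_k(g) g f and drop vanishing terms."""
    salc = Counter()
    for g, perm in ops.items():
        salc[perm[f]] += chars[irrep][g]
    return {b: c for b, c in salc.items() if c}

print(project("A1", "r"))    # {'r': 2, 'rp': 2}   -> 2(r + r')
print(project("A1", "th"))   # {'th': 4}           -> the bend
print(project("B1", "r"))    # {'r': 2, 'rp': -2}  -> 2(r - r')
print(project("B1", "th"))   # {}  (no B1 bend)
```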
The effects of the various transformations on our chosen basis, and the characters of the corresponding representatives, are: \[\begin{array}{lc} E(r, r', \theta) = (r, r', \theta) & \chi(E) = 3 \\ C_2(r, r', \theta) = (r', r, \theta) & \chi(C_2) = 1 \\ \sigma_v(xz)(r, r', \theta) = (r, r', \theta) & \chi(\sigma_v) = 3 \\ \sigma_v'(yz)(r, r', \theta) = (r', r, \theta) & \chi(\sigma_v') = 1 \end{array} \tag{24.9}\]These are the same characters as we found before for the molecular vibrations using the \(3N\) Cartesian basis, and as before, we can see by inspection of the character table that the representation may be reduced down to the sum of irreducible representations \(2A_1 + B_1\). We can now work out the symmetry adapted linear combinations of our new basis set to see how the bond lengths and angle change as \(H_2O\) vibrates in each of the three vibrational modes. Again, we will use the projection operator \(\phi_i = \Sigma_g \chi_k(g) g f_i\) applied to each basis function in turn. Firstly, the \(A_1\) vibrations:\[\begin{array}{rclll} \phi_1(r) & = & r + r' + r + r' & = & 2(r + r') \\ \phi_2(r') & = & r' + r + r' + r & = & 2(r' + r) \\ \phi_3(\theta) & = & \theta + \theta + \theta + \theta & = & 4\theta \end{array} \tag{24.10}\]From these SALCs, we can identify \(\phi_1\) (and \(\phi_2\), which is identical) with the symmetric stretch, in which both bond lengths change in phase with each other, and \(\phi_3\) with the bend. Now for the \(B_1\) vibration:\[\begin{array}{rclll} \phi_4(r) & = & r - r' + r - r' & = & 2(r - r') \\ \phi_5(r') & = & r' - r + r' - r & = & 2(r' - r) \\ \phi_6(\theta) & = & \theta - \theta + \theta - \theta & = & 0 \end{array} \tag{24.11}\]\(\phi_4\) and \(\phi_5\) are not linearly independent, and either one may be chosen to describe the asymmetric stretch, in which one bond lengthens as the other shortens. Note: When using internal coordinates, it is important that all of the coordinates in the basis are linearly independent.
If this is the case then the number of internal coordinates in the basis will be the same as the number of vibrational modes (\(3N - 5\) or \(3N - 6\), depending on whether the molecule is linear or non-linear). This requirement is satisfied in the \(H_2O\) example above. For a less straightforward example, consider the methane molecule, \(CH_4\). It might appear that we could choose a basis made up of the four \(C\)-\(H\) bond lengths and the six \(H\)-\(C\)-\(H\) bond angles. However, this would give us \(10\) basis functions, and \(CH_4\) has only \(9\) vibrational modes. This is because the bond angles are not all independent of each other. It can be tricky to come up with the appropriate internal coordinate basis to describe all of the molecular motions, but all is not lost. Even if you can’t work out the appropriate bond angles to choose, you can always take a basis of bond lengths to investigate the stretching vibrations of a molecule. If you want to know the symmetries of the bending vibrations, you can use the \(3N\) Cartesian basis method to determine the symmetries of all of the vibrational modes and compare these with the stretching mode symmetries to identify the bending modes. Claire Vallance (University of Oxford). This page titled 1.24: Molecular Vibrations is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.26: Group theory and Molecular Electronic States
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.26%3A_Group_theory_and_Molecular_Electronic_States
Firstly, it is important that you understand the difference between a molecular orbital and an electronic state. A strict definition of a molecular orbital is that it is a ‘one electron wavefunction’, i.e. a solution to the Schrödinger equation for the molecule. A complete one electron wavefunction (orbital) is a product of a spatial function, describing the orbital angular momentum and ‘shape’ of the orbital, and a spin function, describing the spin angular momentum.\[\Psi = \Psi_{\text{spatial}} \Psi_{\text{spin}} \tag{26.1}\]In common usage, the word ‘orbital’ is often used to refer only to the spatial part of the ‘true’ orbital. For example, in atoms we generally talk about ‘\(s\) orbitals’ or ‘\(p\) orbitals’ rather than ‘\(s\) spatial wavefunctions’ and ‘\(p\) spatial wavefunctions’. In this context, two electrons with opposite spins may occupy one spatial orbital. A more rigorous way of saying this would be to state that a given spatial wavefunction may be paired with two different spin wavefunctions (one corresponding to a ‘spin up’ electron and one to a ‘spin down’ electron). An electronic state is defined by the electron configuration of the system, and by the quantum numbers of each electron contributing to that configuration. Each electronic state corresponds to one of the energy levels of the molecule. These energy levels will obviously depend on the molecular orbitals that are occupied, and their energies, but they also depend on the way in which the electrons within the various molecular orbitals interact with each other. Interactions between the electrons are essentially determined by the relative orientations of the magnetic moments associated with their orbital and spin angular momenta, which is where the dependence on quantum numbers comes in. A given electron configuration will often give rise to a number of different electronic states if the electrons may be arranged in different ways (with different quantum numbers) within the occupied orbitals.
Last year you were introduced to the idea of atomic states, and learnt how to label the states arising from a given electron configuration using term symbols of the form \(^{2S+1}L_J\). Term symbols of this form define the spin, orbital and total angular momenta of the state, which in turn determine its energy. Molecular states, containing contributions from a number of molecular orbitals, are more complicated. For example, a given molecular orbital will generally contain contributions from several different atomic orbitals, and as a result, electrons cannot easily be assigned an \(l\) quantum number. Instead of using term symbols, molecular states are usually labeled according to their symmetry (the exception to this is linear molecules, for which conventional term symbols may still be used, albeit with a few modifications from the atomic case). We can determine the symmetry of an electronic state by taking the direct product of the irreducible representations for all of the electrons involved in that state (the irreducible representation for each electron is simply the irreducible representation for the molecular orbital that it occupies). Usually we need only consider unpaired electrons. Closed shell species, in which all electrons are paired, almost always belong to the totally symmetric irreducible representation in the point group of the molecule. An example is the molecular orbitals of butadiene, which belongs to the \(C_{2h}\) point group. Since all electrons are paired, the overall symmetry of the state is \(A_g\), and the label for the state once the spin multiplicity is included is \(^1A_g\). We could have arrived at the same result by taking the direct product of the irreducible representations for each electron.
There are two electrons in orbitals with \(A_u\) symmetry, and two in orbitals with \(B_g\) symmetry, so overall we have:\[A_u \otimes A_u \otimes B_g \otimes B_g = A_g \tag{26.2}\]This page titled 1.26: Group theory and Molecular Electronic States is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.27: Spectroscopy - Interaction of Atoms and Molecules with Light
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.27%3A_Spectroscopy_-_Interaction_of_Atoms_and_Molecules_with_Light
In our final application of group theory, we will investigate the way in which symmetry considerations influence the interaction of light with matter. We have already used group theory to learn about the molecular orbitals in a molecule. In this section we will show that it may also be used to predict which electronic states may be accessed by absorption of a photon. We may also use group theory to investigate how light may be used to excite the various vibrational modes of a polyatomic molecule. Last year, you were introduced to spectroscopy in the context of electronic transitions in atoms. You learned that a photon of the appropriate energy is able to excite an electronic transition in an atom, subject to the following selection rules:\[\begin{array}{rcl} \Delta n & = & \text{integer} \\ \Delta l & = & \pm 1 \\ \Delta L & = & 0, \pm 1 \\ \Delta S & = & 0 \\ \Delta J & = & 0, \pm 1; J=0 \not \leftrightarrow J=0 \end{array} \tag{27.1}\]What you may not have learned is where these selection rules come from. In general, different types of spectroscopic transition obey different selection rules. The transitions you have come across so far involve changing the electronic state of an atom, and involve absorption of a photon in the UV or visible part of the electromagnetic spectrum. There are analogous electronic transitions in molecules, which we will consider in more detail shortly. Absorption of a photon in the infrared (IR) region of the spectrum leads to vibrational excitation in molecules, while photons in the microwave (MW) region produce rotational excitation. Each type of excitation obeys its own selection rules, but the general procedure for determining the selection rules is the same in all cases. It is simply to determine the conditions under which the probability of a transition is not identically zero. The first step in understanding the origins of selection rules must therefore be to learn how transition probabilities are calculated.
This requires some quantum mechanics. Last year, you learned about operators, eigenvalues and eigenfunctions in quantum mechanics. You know that if a function is an eigenfunction of a particular operator, then operating on the eigenfunction with the operator will return the observable associated with that state, known as the eigenvalue (i.e. \(\hat{A} \Psi = a \Psi\)). What you may not know is that operating on a function that is NOT an eigenfunction of the operator leads to a change in state of the system. In the transitions we will be considering, the molecule interacts with the electric field of the light (as opposed to NMR spectroscopy, in which the nuclei interact with the magnetic field of the electromagnetic radiation). These transitions are called electric dipole transitions, and the operator we are interested in is the electric dipole operator, usually given the symbol \(\hat{\boldsymbol{\mu}}\), which describes the electric field of the light. If we start in some initial state \(\Psi_i\), operating on this state with \(\hat{\boldsymbol{\mu}}\) gives a new state, \(\Psi' = \hat{\boldsymbol{\mu}} \Psi_i\). If we want to know the probability of ending up in some particular final state \(\Psi_f\), the probability amplitude is simply given by the overlap integral between \(\Psi'\) and \(\Psi_f\). This probability amplitude is called the transition dipole moment, and is given the symbol \(\boldsymbol{\mu}_{fi}\).\[\boldsymbol{\mu}_{fi} = \langle\Psi_f | \Psi'\rangle = \langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle \tag{27.2}\]Physically, the transition dipole moment may be thought of as describing the ‘kick’ the electron receives or imparts to the electric field of the light as it undergoes a transition.
The transition probability is given by the square of the probability amplitude.\[P_{fi} = |\boldsymbol{\mu}_{fi}|^2 = |\langle\Psi_f | \hat{\boldsymbol{\mu}} | \Psi_i\rangle|^2 \tag{27.3}\]Hopefully it is clear that in order to determine the selection rules for an electric dipole transition between states \(\Psi_i\) and \(\Psi_f\), we need to find the conditions under which \(\boldsymbol{\mu}_{fi}\) can be non-zero. One way of doing this would be to write out the equations for the two wavefunctions (which are functions of the quantum numbers that define the two states) and the electric dipole moment operator, and just churn through the integrals. By examining the result, it would then be possible to decide what restrictions must be imposed on the quantum numbers of the initial and final states in order for a transition to be allowed, leading to selection rules of the type listed above for atoms. However, many selection rules may be derived with a lot less work, based simply on symmetry considerations. In section \(17\), we showed how to use group theory to determine whether or not an integral may be non-zero. This forms the basis of our consideration of selection rules. Assume that we have a molecule in some initial state \(\Psi_i\). We want to determine which final states \(\Psi_f\) can be accessed by absorption of a photon. Recall that for an integral to be non-zero, the representation for the integrand must contain the totally symmetric irreducible representation. The integral we want to evaluate is\[\boldsymbol{\mu}_{fi} = \int \Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i d\tau \tag{27.4}\]so we need to determine the symmetry of the function \(\Psi_f^* \hat{\boldsymbol{\mu}} \Psi_i\).
As we learned in Section \(18\), the product of two functions transforms as the direct product of their symmetry species, so all we need to do to see if a transition between two chosen states is allowed is work out the symmetry species of \(\Psi_f\), \(\hat{\boldsymbol{\mu}}\) and \(\Psi_i\), take their direct product, and see if it contains the totally symmetric irreducible representation for the point group of interest. Equivalently (as explained in Section \(18\)), we can take the direct product of the irreducible representations for \(\hat{\boldsymbol{\mu}}\) and \(\Psi_i\) and see if it contains the irreducible representation for \(\Psi_f\). This is best illustrated using a couple of examples. Earlier in the course, we learned how to determine the symmetries of molecular orbitals. The symmetry of an electronic state is found by identifying any unpaired electrons and taking the direct product of the irreducible representations of the molecular orbitals in which they are located. The ground state of a closed-shell molecule, in which all electrons are paired, always belongs to the totally symmetric irreducible representation\(^7\). As an example, the electronic ground state of \(NH_3\), which belongs to the \(C_{3v}\) point group, has \(A_1\) symmetry. To find out which electronic states may be accessed by absorption of a photon, we need to determine the irreducible representations for the electric dipole operator \(\hat{\boldsymbol{\mu}}\). Light that is linearly polarized along the \(x\), \(y\), and \(z\) axes transforms in the same way as the functions \(x\), \(y\), and \(z\) in the character table\(^8\). From the \(C_{3v}\) character table, we see that \(x\)- and \(y\)-polarized light transforms as \(E\), while \(z\)-polarized light transforms as \(A_1\).
Therefore: \(z\)-polarized light may be used to excite electronic states of \(A_1\) symmetry (since \(A_1 \otimes A_1 = A_1\)), while \(x\)- or \(y\)-polarized light may be used to excite states of \(E\) symmetry (since \(E \otimes A_1 = E\)). Of course, the photons must also have the appropriate energy, in addition to having the correct polarization to induce a transition. We can carry out the same analysis for \(H_2O\), which belongs to the \(C_{2v}\) point group. We showed previously that \(H_2O\) has three molecular orbitals of \(A_1\) symmetry, two of \(B_1\) symmetry, and one of \(B_2\) symmetry, with the ground state having \(A_1\) symmetry. In the \(C_{2v}\) point group, \(x\)-polarized light has \(B_1\) symmetry, and can therefore be used to excite electronic states of this symmetry; \(y\)-polarized light has \(B_2\) symmetry, and may be used to access the \(B_2\) excited state; and \(z\)-polarized light has \(A_1\) symmetry, and may be used to access higher lying \(A_1\) states. Consider our previous molecular orbital diagram for \(H_2O\). The electronic ground state has two electrons in a \(B_2\) orbital, giving a state of \(A_1\) symmetry (\(B_2 \otimes B_2 = A_1\)). The first excited electronic state has the configuration \((1B_2)^1(3A_1)^1\) and its symmetry is \(B_2 \otimes A_1 = B_2\). It may be accessed from the ground state by a \(y\)-polarized photon. The second excited state is accessed from the ground state by exciting an electron to the \(2B_1\) orbital. It has the configuration \((1B_2)^1(2B_1)^1\), and its symmetry is \(B_2 \otimes B_1 = A_2\). Since none of \(x\)-, \(y\)- or \(z\)-polarized light transforms as \(A_2\), this state may not be excited from the ground state by absorption of a single photon. Similar considerations apply for vibrational transitions.
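The \(C_{2v}\) symmetry check used in the \(H_2O\) example above can be sketched in a few lines (the character-table rows are standard values; the function and variable names are ours). A transition is allowed for a given polarization when \(\Gamma_f \otimes \Gamma_\mu \otimes \Gamma_i\) contains the totally symmetric irreducible representation:

```python
import numpy as np

# C2v character-table rows over (E, C2, sigma_v(xz), sigma_v'(yz))
irreps = {"A1": (1, 1, 1, 1), "A2": (1, 1, -1, -1),
          "B1": (1, -1, 1, -1), "B2": (1, -1, -1, 1)}
dipole = {"x": "B1", "y": "B2", "z": "A1"}   # how x, y, z transform

def allowed(final, initial):
    """Polarizations for which Gamma_f x Gamma_mu x Gamma_i contains A1."""
    pols = []
    for pol, mu in dipole.items():
        prod = np.prod([irreps[final], irreps[mu], irreps[initial]], axis=0)
        if prod.sum() != 0:        # A1 reduction coefficient is sum/h
            pols.append(pol)
    return pols

print(allowed("B2", "A1"))   # ['y'] -> first excited state of H2O
print(allowed("A2", "A1"))   # []    -> one-photon forbidden
```

The same function, fed the vibrational mode symmetries, reproduces the vibrational polarization rules discussed next.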
Light polarized along the \(x\), \(y\), and \(z\) axes of the molecule may be used to excite vibrations with the same symmetry as the \(x\), \(y\) and \(z\) functions listed in the character table. For example, in the \(C_{2v}\) point group, \(x\)-polarized light may be used to excite vibrations of \(B_1\) symmetry, \(y\)-polarized light to excite vibrations of \(B_2\) symmetry, and \(z\)-polarized light to excite vibrations of \(A_1\) symmetry. In \(H_2O\), we would use \(z\)-polarized light to excite the symmetric stretch and bending modes, and \(x\)-polarized light to excite the asymmetric stretch. Shining \(y\)-polarized light onto a molecule of \(H_2O\) would not excite any vibrational motion. If there are vibrational modes in the molecule that may not be accessed using a single photon, it may still be possible to excite them using a two-photon process known as Raman scattering\(^9\). An energy level diagram for Raman scattering is shown below. The first photon excites the molecule to some high-lying intermediate state, known as a virtual state. Virtual states are not true stationary states of the molecule (i.e. they are not eigenfunctions of the molecular Hamiltonian), but they can be thought of as stationary states of the ‘photon + molecule’ system. These types of states are extremely short lived, and will quickly emit a photon to return the system to a stable molecular state, which may be different from the original state.
Since there are two photons (one absorbed and one emitted) involved in Raman scattering, which may have different polarizations, the transition dipole for a Raman transition transforms as one of the Cartesian products \(x^2\), \(y^2\), \(z^2\), \(xy\), \(xz\), \(yz\) listed in the character tables. Vibrational modes that transform as one of the Cartesian products may be excited by a Raman transition, in much the same way as modes that transform as \(x\), \(y\), or \(z\) may be excited by a one-photon vibrational transition. In \(H_2O\), all of the vibrational modes are accessible by ordinary one-photon vibrational transitions. However, they may also be accessed by Raman transitions. The Cartesian products transform as follows in the \(C_{2v}\) point group.\[\begin{array}{clcl} A_1 & x^2, y^2, z^2 & B_1 & xz \\ A_2 & xy & B_2 & yz \end{array} \tag{27.5}\]The symmetric stretch and the bending vibration of water, both of \(A_1\) symmetry, may therefore be excited by any Raman scattering process involving two photons of the same polarization (\(x\)-, \(y\)- or \(z\)-polarized). The asymmetric stretch, which has \(B_1\) symmetry, may be excited in a Raman process in which one photon is \(x\)-polarized and the other \(z\)-polarized.\(^7\)It is important not to confuse molecular orbitals (the energy levels that individual electrons may occupy within the molecule) with electronic states (arising from the different possible arrangements of all the molecular electrons amongst the molecular orbitals). For example, the electronic states of \(NH_3\) are NOT the same thing as the molecular orbitals we derived earlier in the course. These orbitals were an incomplete set, based only on the valence \(s\) electrons in the molecule. Inclusion of the \(p\) electrons is required for a full treatment of the electronic states.
The \(H_2O\) example above should hopefully clarify this point.\(^8\)‘\(x\)-polarized’ means that the electric vector of the light (an electromagnetic wave) oscillates along the direction of the \(x\) axis.\(^9\)You will cover Raman scattering (also known as Raman spectroscopy) in more detail in later courses. The aim here is really just to alert you to its existence and to show how it may be used to access otherwise inaccessible vibrational modes. This page titled 1.27: Spectroscopy - Interaction of Atoms and Molecules with Light is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.28: Summary
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.28%3A_Summary
Hopefully this course has given you a reasonable introduction to the qualitative description of molecular symmetry, and also to the way in which it can be used quantitatively within the context of group theory to predict important molecular properties. The main things you should have learnt in this text are: Claire Vallance (University of Oxford). This page titled 1.28: Summary is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.29: Appendix A
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.29%3A_Appendix_A
A property of traces of matrix products is that they are invariant under cyclic permutation of the matrices, i.e. \(tr \begin{bmatrix} ABC \end{bmatrix} = tr \begin{bmatrix} BCA \end{bmatrix} = tr \begin{bmatrix} CAB \end{bmatrix}\). For the character of a matrix representative of a symmetry operation \(g\), we therefore have: \[\chi(g) = tr \begin{bmatrix} \Gamma(g) \end{bmatrix} = tr \begin{bmatrix} C \Gamma'(g) C^{-1} \end{bmatrix} = tr \begin{bmatrix} \Gamma'(g) C^{-1} C \end{bmatrix} = tr \begin{bmatrix} \Gamma'(g) \end{bmatrix} = \chi'(g) \tag{29.1}\]The trace of the similarity transformed representative is therefore the same as the trace of the original representative. The formal requirement for two symmetry operations \(g\) and \(g'\) to be in the same class is that there must be some symmetry operation \(f\) of the group such that \(g' = f^{-1} gf\) (the elements \(g\) and \(g'\) are then said to be conjugate). If we consider the characters of \(g\) and \(g'\) we find:\[\chi(g') = tr \begin{bmatrix} \Gamma(g') \end{bmatrix} = tr \begin{bmatrix} \Gamma^{-1}(f) \Gamma(g) \Gamma(f) \end{bmatrix} = tr \begin{bmatrix} \Gamma(g) \Gamma(f) \Gamma^{-1}(f) \end{bmatrix} = tr \begin{bmatrix} \Gamma(g) \end{bmatrix} = \chi(g) \tag{29.2}\]The characters of \(g\) and \(g'\) are identical. The variation theorem states that given a system with a Hamiltonian \(H\), then if \(\phi\) is any normalized, well-behaved function that satisfies the boundary conditions of the Hamiltonian, then\[\langle\phi | H | \phi\rangle \geq E_0 \tag{29.3}\]where \(E_0\) is the true value of the lowest energy eigenvalue of \(H\). This principle allows us to calculate an upper bound for the ground state energy by finding the trial wavefunction \(\phi\) for which the integral is minimized (hence the name; trial wavefunctions are varied until the optimum solution is found).
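The content of the theorem is easy to illustrate numerically in a finite basis, where \(H\) becomes a Hermitian matrix and \(\langle\phi|H|\phi\rangle\) a Rayleigh quotient. A small sketch (the matrix and trial vectors are random stand-ins, not a real Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2                       # random symmetric "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]           # true lowest eigenvalue

# Every normalized trial vector gives an energy at or above E0
worst = min((v / np.linalg.norm(v)) @ H @ (v / np.linalg.norm(v))
            for v in rng.standard_normal((1000, 5)))
print(worst >= E0 - 1e-12)   # True
```

Minimizing over trial vectors drives the Rayleigh quotient down towards \(E_0\), which is exactly what the linear variation method below does analytically.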
Let us first verify that the variational principle is indeed correct. We first define an integral \[\begin{array}{rcll} I & = & \langle\phi | H - E_0 | \phi\rangle & \\ & = & \langle\phi | H | \phi\rangle - \langle\phi | E_0 | \phi\rangle & \\ & = & \langle\phi | H | \phi\rangle - E_0 \langle\phi | \phi\rangle & \\ & = & \langle\phi | H | \phi\rangle - E_0 & \text{since} \: \phi \: \text{is normalized} \end{array} \tag{29.4}\]If we can prove that \(I \geq 0\) then we have proved the variation theorem. Let \(\Psi_i\) and \(E_i\) be the true eigenfunctions and eigenvalues of \(H\), so \(H \Psi_i = E_i \Psi_i\). Since the eigenfunctions \(\Psi_i\) form a complete basis set for the space spanned by \(H\), we can expand any wavefunction \(\phi\) in terms of the \(\Psi_i\) (so long as \(\phi\) satisfies the same boundary conditions as \(\Psi_i\)).\[\phi = \sum_k a_k \Psi_k \tag{29.5}\]Substituting this function into our integral \(I\) gives\[\begin{array}{rcl} I & = & \left \langle \sum_k a_k \Psi_k | H-E_0 | \sum_j a_j \Psi_j \right \rangle \\ & = & \langle\sum_k a_k \Psi_k | \sum_j (H-E_0) a_j \Psi_j\rangle \end{array} \tag{29.6}\]If we now use \(H \Psi_j = E_j \Psi_j\), we obtain\[\begin{array}{rcl} I & = & \langle\sum_k a_k \Psi_k | \sum_j a_j (E_j - E_0) \Psi_j\rangle \\ & = & \sum_k \sum_j a_k^* a_j (E_j - E_0) \langle\Psi_k | \Psi_j\rangle \\ & = & \sum_k \sum_j a_k^* a_j (E_j - E_0) \delta_{jk} \end{array} \tag{29.7}\]We now perform the sum over \(j\), losing all terms except the \(j = k\) term, to give\[\begin{array}{rcl} I & = & \sum_k a_k^* a_k (E_k - E_0) \\ & = & \sum_k |a_k|^2 (E_k- E_0) \end{array} \tag{29.8}\]Since \(E_0\) is the lowest eigenvalue, \(E_k - E_0\) must be non-negative, as must \(|a_k|^2\).
This means that all terms in the sum are non-negative and \(I \geq 0\) as required. For wavefunctions that are not normalized, the variational integral becomes:\[\frac{\langle\phi | H | \phi\rangle}{\langle\phi | \phi\rangle} \geq E_0 \tag{29.9}\]In the study of molecules, the variation principle is often used to determine the coefficients in a linear variation function, a linear combination of \(n\) linearly independent functions \(\begin{pmatrix} f_1, f_2, ..., f_n \end{pmatrix}\) (often atomic orbitals) that satisfy the boundary conditions of the problem, i.e. \(\phi = \sum_i c_i f_i\). The coefficients \(c_i\) are parameters to be determined by minimizing the variational integral. In this case, we have:\[\begin{array}{rcll} \langle\phi | H | \phi\rangle & = & \langle\sum_i c_i f_i | H | \sum_j c_j f_j\rangle & \\ & = & \sum_i \sum_j c_i^* c_j \langle f_i | H | f_j\rangle & \\ & = & \sum_i \sum_j c_i^* c_j H_{ij} \end{array} \tag{29.10}\]where \(H_{ij}\) is the Hamiltonian matrix element.\[\begin{array}{rcll} \langle\phi | \phi\rangle & = & \langle\sum_i c_i f_i | \sum_j c_j f_j\rangle & \\ & = & \sum_i \sum_j c_i^* c_j \langle f_i | f_j\rangle & \\ & = & \sum_i \sum_j c_i^* c_j S_{ij} \end{array} \tag{29.11}\]where \(S_{ij}\) is the overlap matrix element. The variational energy is therefore\[E = \dfrac{\sum_i \sum_j c_i^* c_j H_{ij}}{\sum_i \sum_j c_i^* c_j S_{ij}} \tag{29.12}\]which rearranges to give\[E \sum_i \sum_j c_i^* c_j S_{ij} = \sum_i \sum_j c_i^* c_j H_{ij} \tag{29.13}\]We want to minimize the energy with respect to the linear coefficients \(c_i\), requiring that \(\dfrac{\partial E}{\partial c_i} = 0\) for all \(i\).
Differentiating both sides of the above expression gives\[\frac{\partial E}{\partial c_k}\sum_i \sum_j c_i^* c_j S_{ij} + E \sum_i \sum_j \left[ \frac{\partial c_i^*}{\partial c_k} c_j + \frac{\partial c_j}{\partial c_k} c_i^* \right] S_{ij} = \sum_i \sum_j \left[ \frac{\partial c_i^*}{\partial c_k}c_j + \frac{\partial c_j}{\partial c_k}c_i^* \right] H_{ij} \tag{29.14}\]Since \(\frac{\partial c_i^*}{\partial c_k} = \delta_{ik}\) and \(S_{ij} = S_{ji}\), \(H_{ij} = H_{ji}\), we have\[\frac{\partial E}{\partial c_k} \sum_i \sum_j c_i^* c_j S_{ij} + 2E \sum_i c_i S_{ik} = 2 \sum_i c_i H_{ik} \tag{29.15}\]When \(\frac{\partial E}{\partial c_k} = 0\), this gives\[\begin{array}{cll} \boxed{\sum_i c_i (H_{ik} - ES_{ik}) = 0} & \text{for all } k & \text{SECULAR EQUATIONS} \end{array} \tag{29.16}\]This page titled 1.29: Appendix A is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
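For a two-function basis, the secular equations have a nontrivial solution only when \(\det(H - ES) = 0\), which expands to a quadratic in \(E\). A small sketch in Python (the matrix elements below are invented, Hückel-style numbers, not values from the text):

```python
import math

# Roots of the 2x2 secular determinant |H - E*S| = 0.
def secular_energies_2x2(H11, H22, H12, S11=1.0, S22=1.0, S12=0.0):
    """Solve det(H - E*S) = 0 for a symmetric 2x2 problem."""
    # Expanding the determinant gives the quadratic a*E^2 + b*E + c = 0.
    a = S11 * S22 - S12 ** 2
    b = 2.0 * H12 * S12 - (H11 * S22 + H22 * S11)
    c = H11 * H22 - H12 ** 2
    disc = math.sqrt(b ** 2 - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# Degenerate case H11 = H22 = alpha, H12 = beta, S12 = S: the roots
# should be (alpha + beta)/(1 + S) and (alpha - beta)/(1 - S).
alpha, beta, S = -1.0, -0.2, 0.1   # made-up Hueckel-style numbers
E_low, E_high = secular_energies_2x2(alpha, alpha, beta, S12=S)
print(E_low, E_high)   # approximately -1.0909 and -0.8889
```

For larger basis sets the same condition becomes a generalized matrix eigenvalue problem, which is solved numerically rather than by expanding the determinant.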
1.30: Appendix B- Point Groups
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Symmetry_(Vallance)/01%3A_Chapters/1.30%3A_Appendix_B-_Point_Groups
\[\begin{array}{l|c} C_1 & E \\ \hline A & 1 \end{array} \label{30.1}\]\[\begin{array}{l|cc|l|l} C_s & E & \sigma_h & & \\ \hline A' & 1 & 1 & x, y, R_z & x^2, y^2, z^2, xy \\ A'' & 1 & -1 & z, R_x, R_y & yz, xz \end{array} \label{30.2}\]\[\begin{array}{l|cc|l|l} C_i & E & i & & \\ \hline A_g & 1 & 1 & R_x, R_y, R_z & x^2, y^2, z^2, xy, xz, yz \\ A_u & 1 & -1 & x, y, z & \end{array} \label{30.3}\]\[\begin{array}{l|cc|l|l} C_2 & E & C_2 & & \\ \hline A & 1 & 1 & z, R_z & x^2, y^2, z^2, xy \\ B & 1 & -1 & x, y, R_x, R_y & yz, xz \end{array} \label{30.4}\]\[\begin{array}{l|c|l|l} C_3 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 & & c=e^{2\pi i/3} \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & z, R_z & x^2 + y^2, z^2 \\ E & \begin{Bmatrix} 1 & c & c^* \\ 1 & c^* & c \end{Bmatrix} & x, y, R_x, R_y & x^2-y^2, xy, xz, yz \end{array} \label{30.5}\]\[\begin{array}{l|c|l|l} C_4 & E \: \: \: \: \: C_4 \: \: \: \: \: C_2 \: \: \: \: \: C_4^3 & & \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & z, R_z & x^2 + y^2, z^2 \\ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & & x^2 - y^2, xy \\ E & \begin{Bmatrix} 1 & i & -1 & -i \\ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & yz, xz \end{array} \label{30.6}\]\[\begin{array}{l|cccc|l|l} C_{2v} & E & C_2 & \sigma_v(xz) & \sigma_v'(yz) & & \\ \hline A_1 & 1 & 1 & 1 & 1 & z & x^2, y^2, z^2 \\ A_2 & 1 & 1 & -1 & -1 & R_z & xy \\ B_1 & 1 & -1 & 1 & -1 & x, R_y & xz \\ B_2 & 1 & -1 & -1 & 1 & y, R_x & yz \end{array} \label{30.7}\]\[\begin{array}{l|ccc|l|l} C_{3v} & E & 2C_3 & 3\sigma_v & & \\ \hline A_1 & 1 & 1 & 1 & z & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & -1 & R_z & \\ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.8}\]\[\begin{array}{l|cccc|l|l} C_{2h} & E & C_2 & i & \sigma_h & & \\ \hline A_g & 1 & 1 & 1 & 1 & R_z & x^2, y^2, z^2, xy \\ B_g & 1 & -1 & 1 & -1 & R_x, R_y & xz, yz \\ A_u & 1 & 1 & -1 & -1 & z &
\\ B_u & 1 & -1 & -1 & 1 & x, y & \end{array} \label{30.9}\]\[\begin{array}{l|c|l|l}C_{3h} & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: \sigma_h \: \: \: \: \: S_3 \: \: \: \: \: S_3^5 & & c = e^{2\pi i/3} \\ \hline A' & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ E' & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \\ 1 & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & x, y & x^2 - y^2, xy \\ A'' & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \\ E'' & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \\ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & R_x, R_y & xz, yz \end{array} \label{30.10}\]\[\begin{array}{l|cccc|l|l} D_2 & E & C_2(z) & C_2(y) & C_2(x) & & \\ \hline A & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \\ B_1 & 1 & 1 & -1 & -1 & z, R_z & xy \\ B_2 & 1 & -1 & 1 & -1 & y, R_y & xz \\ B_3 & 1 & -1 & -1 & 1 & x, R_x & yz \end{array} \label{30.11}\]\[\begin{array}{l|ccc|l|l} D_3 & E & 2C_3 & 3C_2 & & \\ \hline A_1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_2 & 1 & 1 & -1 & z, R_z & \\ E & 2 & -1 & 0 & x, y, R_x, R_y & x^2 - y^2, xy, xz, yz \end{array} \label{30.12}\]\[\begin{array}{l|cccccccc|l|l} D_{2h} & E & C_2(z) & C_2(y) & C_2(x) & i & \sigma(xy) & \sigma(xz) & \sigma(yz) & & \\ \hline A_g & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & x^2, y^2, z^2 \\ B_{1g} & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & R_z & xy \\ B_{2g} & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & R_y & xz \\ B_{3g} & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & R_x & yz \\ A_u & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & & \\ B_{1u} & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & z & \\ B_{2u} & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 & y & \\ B_{3u} & 1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 & x & \end{array} \label{30.13}\]\[\begin{array}{l|ccccc|l|l} D_{2d} & E & 2S_4 & C_2 & 2C_2' & 2\sigma_d & & \\ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\
A_2 & 1 & 1 & 1 & -1 & -1 & R_z & \\ B_1 & 1 & -1 & 1 & 1 & -1 & & x^2 - y^2 \\ B_2 & 1 & -1 & 1 & -1 & 1 & z & xy \\ E & 2 & 0 & -2 & 0 & 0 & x, y, R_x, R_y & xz, yz \end{array} \label{30.14}\]\[\begin{array}{l|cccccc|l|l} D_{3d} & E & 2C_3 & 3C_2 & i & 2S_6 & 3\sigma_d & & \\ \hline A_{1g} & 1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2, z^2 \\ A_{2g} & 1 & 1 & -1 & 1 & 1 & -1 & R_z & \\ E_g & 2 & -1 & 0 & 2 & -1 & 0 & R_x, R_y & x^2 - y^2, xy, xz, yz \\ A_{1u} & 1 & 1 & 1 & -1 & -1 & -1 & & \\ A_{2u} & 1 & 1 & -1 & -1 & -1 & 1 & z & \\ E_u & 2 & -1 & 0 & -2 & 1 & 0 & x, y & \end{array} \label{30.15}\]\[\begin{array}{l|cccccccc|l|l} D_{\infty h} & E & 2C_\infty^\Phi & \ldots & \infty \sigma_v & i & 2S_\infty^\Phi & \ldots & \infty C_2 & & \\ \hline \Sigma_g^+ & 1 & 1 & \ldots & 1 & 1 & 1 & \ldots & 1 & & x^2 + y^2, z^2 \\ \Sigma_g^- & 1 & 1 & \ldots & -1 & 1 & 1 & \ldots & -1 & R_z & \\ \Pi_g & 2 & 2\cos \Phi & \ldots & 0 & 2 & -2\cos \Phi & \ldots & 0 & R_x, R_y & xz, yz \\ \Delta_g & 2 & 2\cos 2\Phi & \ldots & 0 & 2 & 2\cos 2\Phi & \ldots & 0 & & x^2 - y^2, xy \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \\ \Sigma_u^+ & 1 & 1 & \ldots & 1 & -1 & -1 & \ldots & -1 & z & \\ \Sigma_u^- & 1 & 1 & \ldots & -1 & -1 & -1 & \ldots & 1 & & \\ \Pi_u & 2 & 2\cos \Phi & \ldots & 0 & -2 & 2\cos \Phi & \ldots & 0 & x, y & \\ \Delta_u & 2 & 2\cos 2\Phi & \ldots & 0 & -2 & -2\cos 2\Phi & \ldots & 0 & & \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & & \end{array} \label{30.16}\]\[\begin{array}{l|c|l|l} S_4 & E \: \: \: \: \: S_4 \: \: \: \: \: C_2 \: \: \: \: \: S_4^3 & & \\ \hline A & 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ B & 1 \: \: \: \: -1 \: \: \: \: \: \: \: \: 1 \: \: \: \: -1 & z & x^2 - y^2, xy \\ E & \begin{Bmatrix} 1 & i & -1 & -i \\ 1 & -i & -1 & i \end{Bmatrix} & x, y, R_x, R_y & xz, yz \end{array}
\label{30.17}\]\[\begin{array}{l|c|l|l} S_6 & E \: \: \: \: \: C_3 \: \: \: \: \: C_3^2 \: \: \: \: \: i \: \: \: \: \: S_6^5 \: \: \: \: \: S_6 & & c=e^{2\pi i/3} \\ \hline A_g & 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: \: 1 & R_z & x^2 + y^2, z^2 \\ E_g & \begin{Bmatrix} 1 & \: \: c & \: \: c^* & \: \: 1 & \: \: c & \: \: c^* \\ 1 & \: \: c^* & \: \: c & \: \: 1 & \: \: c^* & \: \: c \end{Bmatrix} & R_x, R_y & x^2 - y^2, xy, xz, yz \\ A_u & 1 \: \: \: \: \: \: 1 \: \: \: \: \: \: 1 \: \: \: \: -1 \: \: \: \: -1 \: \: \: \: \: -1 & z & \\ E_u & \begin{Bmatrix} 1 & c & c^* & -1 & -c & -c^* \\ 1 & c^* & c & -1 & -c^* & -c \end{Bmatrix} & x, y & \end{array} \label{30.18}\]\[\begin{array}{l|c|l|l} T & E \: \: \: 4C_3 \: \: \: 4C_3^2 \: \: \: 3C_2 & & c=e^{2\pi i/3} \\ \hline A & 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 \: \: \: \: \: \: \: 1 & & x^2 + y^2 + z^2 \\ E & \begin{Bmatrix} 1 & c & c^* & 1 \\ 1 & c^* & c & 1 \end{Bmatrix} & & 2z^2 - x^2 - y^2, x^2 - y^2 \\ T & 3 \: \: \: \: \: 0 \: \: \: \: \: \: \: 0 \: \: \: -1 & R_x, R_y, R_z, x, y, z & xy, xz, yz \end{array} \label{30.19}\]\[\begin{array}{l|ccccc|l|l} T_d & E & 8C_3 & 3C_2 & 6S_4 & 6\sigma_d & & \\ \hline A_1 & 1 & 1 & 1 & 1 & 1 & & x^2 + y^2 + z^2 \\ A_2 & 1 & 1 & 1 & -1 & -1 & & \\ E & 2 & -1 & 2 & 0 & 0 & & 2z^2 - x^2 - y^2, x^2 - y^2 \\ T_1 & 3 & 0 & -1 & 1 & -1 & R_x, R_y, R_z & \\ T_2 & 3 & 0 & -1 & -1 & 1 & x, y, z & xy, xz, yz \end{array} \label{30.20}\]\[\begin{array}{llllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} & \boldsymbol{T_1} & \boldsymbol{T_2} \\ \boldsymbol{A_1} & A_1 & A_2 & E & T_1 & T_2 \\ \boldsymbol{A_2} & & A_1 & E & T_2 & T_1 \\ \boldsymbol{E} & & & A_1 + A_2 + E & T_1 + T_2 & T_1 + T_2 \\ \boldsymbol{T_1} & & & & A_1 + E + T_1 + T_2 & A_2 + E + T_1 + T_2 \\ \boldsymbol{T_2} & & & & & A_1 + E + T_1 + T_2 \end{array} \label{30.21}\]\[\begin{array}{llllll} & \boldsymbol{A_1}
& \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E} \\ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E \\ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E \\ \boldsymbol{B_1} & & & A_1 & A_2 & E \\ \boldsymbol{B_2} & & & & A_1 & E \\ \boldsymbol{E} & & & & & A_1 + A_2 + B_1 + B_2 \end{array} \label{30.22}\]\[\begin{array}{llll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{E} \\ \boldsymbol{A_1} & A_1 & A_2 & E \\ \boldsymbol{A_2} & & A_1 & E \\ \boldsymbol{E} & & & A_1 + A_2 + E \end{array} \label{30.23}\]\[\begin{array}{lllllll} & \boldsymbol{A_1} & \boldsymbol{A_2} & \boldsymbol{B_1} & \boldsymbol{B_2} & \boldsymbol{E_1} & \boldsymbol{E_2} \\ \boldsymbol{A_1} & A_1 & A_2 & B_1 & B_2 & E_1 & E_2 \\ \boldsymbol{A_2} & & A_1 & B_2 & B_1 & E_1 & E_2 \\ \boldsymbol{B_1} & & & A_1 & A_2 & E_2 & E_1 \\ \boldsymbol{B_2} & & & & A_1 & E_2 & E_1 \\ \boldsymbol{E_1} & & & & & A_1 + A_2 + E_2 & B_1 + B_2 + E_1 \\ \boldsymbol{E_2} & & & & & & A_1 + A_2 + E_2 \end{array} \label{30.24}\]\(^*\)in D\(_{3h}\) make the following changes in the above table\[\begin{array}{ll} \text{In table} & \text{In D}_{3h} \\ A_1 & A_1' \\ A_2 & A_2' \\ B_1 & A_1'' \\ B_2 & A_2'' \\ E_1 & E'' \\ E_2 & E' \end{array} \label{30.25}\]This page titled 1.30: Appendix B- Point Groups is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Claire Vallance via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
InfoPage
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/00%3A_Front_Matter/02%3A_InfoPage
This text is disseminated via the Open Education Resource (OER) LibreTexts Project and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such effects.Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advance to basic level) and horizontally (across different fields) integrated.The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 
1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education. This text was compiled on 07/13/2023
Preface
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/00%3A_Front_Matter/05%3A_Preface
This textbook is the official textbook for the Physical Chemistry 1 Course (CHM 3001) at Florida Tech. The instructor for this course and author of this textbook is Dr. Roberto Peverati. Contacts: Office OPS 333, 674-7735. Chemistry Program, Department of Biomedical and Chemical Engineering and Science, Florida Institute of Technology, Melbourne, FL. This live open textbook is distributed under the CC-BY-SA 4.0 License and it was funded by the Florida Tech Open Educational Resources Grant Program: A Collaboration of the Teaching Council, eEducation, and the Evans Library. Please read this book carefully, since everything that will be in your exams is explained here. Since this book is specifically tailored for the CHM 3001 course at Florida Tech, there are no superfluous parts. In other words, everything in it might be subject to question in the quizzes and the final exam. Definitions and exercises are usually numbered and are highlighted in the text in this format (lighter grey, indented, and following a grey vertical bar). Please study the definitions carefully since they are fundamental concepts that will be used several times in the remainder of the text, and they will be subject to quizzes and exams. Exercises are essential for cementing the concepts, and you should attempt to execute them first without looking at the solution. Even if you were able to solve an exercise on your own, always read the solution after, since it might contain additional explanations expanding the main concepts in the text. Navigating the book should be straightforward. On each page, there is a useful sidebar on the left that gives you an overview of all chapters, and a toolbar at the top with important tools. Arrows to shift between chapters might also be present, depending on your browser. If you are old-school and prefer a pdf, you can download a printout by clicking on the toolbar’s corresponding icon.
If you are really old-school and prefer a printed book, the best solution is to download the pdf and print it yourself. It is a LaTeX book, and I can promise you it will look good on paper. However, I cannot provide physical copies to each student. In the toolbar, you will find a useful search box that is capable of searching the entire book. The most adventurous will find in the toolbar a link to the raw GitHub source code. Feel free to head on over there and fork the book. Each chapter of this book represents one week of work in the classroom and at home. The sidebar on the left will reflect your syllabus, as well as the main structure of the class on Canvas. The book is a live document, which means it will be updated throughout the semester with new material. While you are not required to check it every day, you might want to review each week’s chapter before the lecture on Friday. If you spot a mistake or a typo, contact Dr. Peverati via email and you will receive a credit of up to three points towards your final score, once the typo has been verified and corrected.
1.1: Thermodynamic Systems
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/01%3A_Systems_and_Variables/1.01%3A_Thermodynamic_Systems
A thermodynamic system—or just simply a system—is a portion of space with defined boundaries that separate it from its surroundings (see also the title picture of this book). The surroundings may include other thermodynamic systems or physical systems that are not thermodynamic systems. A boundary may be a real physical barrier or a purely notional one. Typical examples of systems are reported below.\(^1\) In the first case, a liquid is contained in a typical Erlenmeyer flask. The boundaries of the system are the glass walls of the flask. The second system is represented by the gas contained in a balloon. The boundary is a physical barrier also in this case, being the plastic of the balloon. The third case is that of a thunder cloud. The boundary is not a well-defined physical barrier, but rather some condition of pressure and chemical composition at the interface between the cloud and the atmosphere. Finally, the fourth case is the case of an open flame. In this case, the boundary is again non-physical, and possibly even harder to define than for a cloud. For example, we can choose to define the flame based on some temperature threshold, color criterion, or even some chemical one. Despite the lack of physical boundaries, the cloud and the flame—as portions of space containing matter—can be defined as thermodynamic systems. A system can exchange exclusively mass, exclusively energy, or both mass and energy with its surroundings. Depending on the boundaries’ ability to transfer these quantities, a system is defined as open, closed, or isolated. An open system exchanges both mass and energy. A closed system exchanges only energy, but not mass. Finally, an isolated system exchanges neither mass nor energy. When a system exchanges mass or energy with its surroundings, some of its parameters (variables) change. For example, if a system loses mass to the surroundings, the number of molecules (or moles) in the system will decrease.
Similarly, if a system absorbs some energy, one or more of its variables (such as its temperature) increase. Mass and energy can flow into the system or out of the system. Let’s consider mass exchange only. If some molecules of a substance leave the system, and then the same amount of molecules flow back into the system, the system will not be modified. We can count, for example, 100 molecules leaving a system and assign them the value of –100 in an outgoing process, and then observe the same 100 molecules going back into the system and assign them a value of +100. Regardless of the number of molecules present in the system in the first place, the overall balance will be –100 (from the outgoing process) +100 (from the ingoing process) = 0, which brings the system to its initial situation (mass has not changed). However, from a mathematical standpoint, we could have equally assigned the label +100 to the outgoing process and –100 to the ingoing one, and the overall total would have stayed the same: +100–100 = 0. Which of the two labels is best? For this case, it seems natural to define a mass going out of the system as negative (the system is losing it), and a mass going into the system as positive (the system is gaining it), but is it as straightforward for energy?Here is another example. Let’s consider a system that is composed of your body. When you exercise, you lose mass in the form of water (sweat) and CO2 (from respiration). This mass loss can be easily measured by stepping on a scale before and after exercise. The number you observe on the scale will go down. Hence you have lost weight. After exercise, you will reintegrate the lost mass by drinking and eating. If you have reinstated the same amount you have lost, your weight will be the same as before the exercise (no weight loss). Nevertheless, which label do you attach to the amounts that you have lost and gained? 
Let’s say that you are running a 5 km race without drinking or eating, and you measure your weight dropping 2 kg after the race. After the race, you drink 1.5 kg of water and eat a 500 g energy bar. Overall you did not lose any weight, and it would seem reasonable to label the 2 kg that you’ve lost as negative (–2) and the 1.5 kg of water that you drank and the 500 g bar that you ate as positive (+1.5 +0.5 = +2). But is it the only way? After all, you neither gained nor lost any weight, so why not call the 2 kg due to exercise +2 and the 2 that you’ve ingested –2? It might seem silly, but mathematically it would not make any difference, the total would still be zero. Now, let’s consider energy instead of mass. To run the 5 km race, you have spent 500 kcal, which you then reintegrate precisely by eating the energy bar. Which sign would you put in front of the kilocalories that you “burned” during the race? In principle, you’ve lost them, so if you want to be consistent, you should use a negative sign. But if you think about it, you’ve put quite an effort into “losing” those kilocalories, so it might not feel bad to assign them a positive sign instead. After all, it’s perfectly OK to say, “I’ve done a 500 kcal run today”, while it might sound quite awkward to say, “I’ve done a –500 kcal run today.” Our previous exercise with mass demonstrates that it doesn’t really matter which sign you put in front of the quantities. As long as you are consistent throughout the process, the signs will cancel out. If you’ve done a +500 kcal run, you’ve eaten a bar for –500 kcal, resulting in a total zero loss/gain. Alternatively, if you’ve done a –500 kcal run, you would have eaten a +500 kcal bar, for a total of again zero loss/gain. These simple examples demonstrate that the sign that we assign to quantities that flow through a boundary is arbitrary (i.e., we can define it any way we want, as long as we are always consistent with ourselves).
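The zero-sum bookkeeping in the race example can be sketched in a few lines of Python (the flows are the made-up numbers from the example, using the choice in which flows into the system are positive and flows out are negative):

```python
# Sign bookkeeping for the race example: flows INTO the system (you)
# are labeled positive, flows OUT of the system are labeled negative.

# Mass balance, in kg:
mass_flows = [
    -2.0,   # water + CO2 lost during the race (leaves the system)
    +1.5,   # water drunk afterwards (enters the system)
    +0.5,   # energy bar eaten (enters the system)
]

# Energy balance, in kcal:
energy_flows = [
    -500.0,  # energy "burned" during the race (leaves the system)
    +500.0,  # energy bar (enters the system)
]

print(sum(mass_flows), sum(energy_flows))   # 0.0 0.0: no net change
```

Flipping every sign simultaneously leaves both totals at zero, which is exactly the arbitrariness the text describes; only consistency matters.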
There is no best way to assign those signs. If you ask two different people, you might obtain two different answers. But we are scientists, and we must make sure to be rigorous. For this reason, chemists have established a convention for the signs that we will follow throughout this course. If we are consistent in following the convention, we are guaranteed to never make any mistake with the signs. The chemistry convention of the sign is system-centric:\(^2\) mass or energy that enters the system carries a positive sign, while mass or energy that leaves the system carries a negative sign. If you want a trick to remember the convention, use the weight loss/gain during the exercise example above. You are the system, if you lose weight, the kilograms will be negative (–2 kg), while if you gain weight, they will be positive (+2 kg). Similarly, if you eat an energy bar, you are the system, and you will have increased your energy by +500 kcal (positive). In contrast, if you burned energy during exercise, you are the system, and you will have lost energy, hence –500 kcal (negative). If the system is a balloon filled with gas, and the balloon is losing mass, you are the balloon, and you are losing weight; hence the mass will be negative. If the balloon is absorbing heat (likely increasing its temperature and increasing its volume), you are the system, and you are gaining heat; hence heat will be positive.This page titled 1.1: Thermodynamic Systems is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
1.2: Thermodynamic Variables
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/01%3A_Systems_and_Variables/1.02%3A_Thermodynamic_Variables
The system is defined and studied using parameters that are called variables. These variables are quantities that we can measure, such as pressure and temperature. However, don’t be surprised if, on some occasions, you encounter some variable that is a little harder to measure directly, such as entropy. The variables depend only on the current state of the system, and therefore they define it. If I know the values of all the “relevant variables” of a system, I know the state of the system. The relationship between the variables is described by mathematical functions called state functions, while the “relevant variables” are called natural variables.What are the “relevant variables” of a system? The answer to this question depends on the system, and it is not always straightforward. The simplest case is the case of an ideal gas, for which the natural variables are those that enter the ideal gas law and the corresponding equation:\[ PV=nRT \label{1.2.1} \]Therefore, the natural variables for an ideal gas are the pressure P, the volume V, the number of moles n, and the temperature T, with R being the ideal gas constant. Recalling from the general chemistry courses, R is a universal dimensional constant which has the value R = 8.31 J/(mol K) in SI units. We will use the ideal gas equation and its variables as an example to discuss variables and functions in this chapter. We will analyze more complicated cases in the next chapters. Variables can be classified according to numerous criteria, each with its advantages and disadvantages. A useful classification distinguishes extensive variables, which depend on the size of the system (such as the volume V and the number of moles n), from intensive variables, which do not (such as the pressure P and the temperature T). When we deal with thermodynamic systems, it is more convenient to work with intensive variables. Luckily, it is relatively easy to convert extensive variables into intensive ones by just taking the ratio of two of them. For an ideal gas, by taking the ratio between V and n, we obtain the intensive variable called molar volume:\[ \overline{V}=\dfrac{V}{n}.
\label{1.2.2} \]We can then recast Equation \ref{1.2.1} as:\[ P\overline{V}=RT, \label{1.2.3} \]which is the preferred equation that we will use for the remainder of this course. The ideal gas equation connects the 3 variables pressure, molar volume, and temperature, reducing the number of independent variables to just 2. In other words, once 2 of the 3 variables are known, the other one can be easily obtained using these simple relations:\[ P(T,\overline{V})=\dfrac{RT}{\overline{V}}, \label{1.2.4} \]\[ \overline{V}(T,P)=\dfrac{RT}{P}, \label{1.2.5} \]\[ T(P,\overline{V})=\dfrac{P\overline{V}}{R}. \label{1.2.6} \]These equations define three state functions, each one expressed in terms of two independent natural variables. For example, Equation \ref{1.2.4} defines the state function called “pressure”, expressed as a function of temperature and molar volume. Similarly, Equation \ref{1.2.5} defines the “molar volume” as a function of temperature and pressure, and Equation \ref{1.2.6} defines the “temperature” as a function of pressure and molar volume. 
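Each of the three state functions in Equations \ref{1.2.4}-\ref{1.2.6} is a one-line computation; a small sketch in Python (the numerical state point is purely illustrative):

```python
R = 8.314  # ideal gas constant, J / (mol K)

# The three state functions of Equations (1.2.4)-(1.2.6), each
# expressed in terms of its two natural variables.
def pressure(T, Vm):
    """P(T, Vm) = R*T/Vm, with Vm the molar volume in m^3/mol."""
    return R * T / Vm

def molar_volume(T, P):
    """Vm(T, P) = R*T/P."""
    return R * T / P

def temperature(P, Vm):
    """T(P, Vm) = P*Vm/R."""
    return P * Vm / R

# Illustrative round trip: an ideal gas at 298.15 K and 1 atm.
P0 = 101325.0                       # Pa
Vm0 = molar_volume(298.15, P0)      # about 0.0245 m^3/mol
print(pressure(298.15, Vm0))        # recovers 101325.0 Pa
print(temperature(P0, Vm0))         # recovers 298.15 K
```

The round trip illustrates the point in the text: once two of the three variables are known, the third is fixed.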
When we know the natural variables that define a state function, we can express the function using its total differential, for example for the pressure \(P(T, \overline{V})\):\[ dP=\left( \dfrac{\partial P}{\partial T} \right)dT + \left( \dfrac{\partial P}{\partial \overline{V}} \right)d\overline{V} \label{1.2.7} \]Recalling Schwarz’s theorem, the mixed partial second derivatives that can be obtained from Equation \ref{1.2.7} are the same:\[ \dfrac{\partial^2 P}{\partial T \partial \overline{V}}=\dfrac{\partial}{\partial \overline{V}}\dfrac{\partial P}{\partial T}=\dfrac{\partial}{\partial T}\dfrac{\partial P}{\partial \overline{V}}=\dfrac{\partial^2 P}{\partial \overline{V} \partial T} \label{1.2.8} \]This can be easily verified considering that:\[ \dfrac{\partial}{\partial \overline{V}} \dfrac{\partial P}{\partial T} = \dfrac{\partial}{\partial \overline{V}} \left(\dfrac{R}{\overline{V}}\right) = -\dfrac{R}{\overline{V}^2} \label{1.2.9} \]and\[ \dfrac{\partial}{\partial T} \dfrac{\partial P}{\partial \overline{V}} = \dfrac{\partial}{\partial T} \left(\dfrac{-RT}{\overline{V}^2}\right) = -\dfrac{R}{\overline{V}^2} \label{1.2.10} \]While for the ideal gas law, all the variables are “well-behaved” and always satisfy Schwarz’s theorem, we will encounter some variables for which Schwarz’s theorem does not hold. Mathematically, if Schwarz’s theorem is violated (i.e., if the mixed second derivatives are not equal), then the corresponding function cannot be integrated, hence it is not a state function. The differential of a function that cannot be integrated cannot be defined exactly. Thus, these functions are called path functions; that is, they depend on the path rather than the state. The most typical examples of path functions that we will encounter in the next chapters are heat (\(Q\)) and work (\(W\)).
For these functions, we cannot define exact differentials \(dQ\) and \(dW\), and we must introduce a new notation to define their “inexact differentials” \(đ Q\) and \(đ W\). We will return to exact and inexact differentials when we discuss heat and work, but for this chapter, it is crucial to notice the difference between a state function and a path function. A typical example to understand the difference between state and path function is to consider the distance between two geographical locations. Let’s, for example, consider the distance between New York City and Los Angeles. If we fly straight from one city to the other, there are roughly 4,000 km between them. This “air distance” depends exclusively on the geographical location of the two cities. It stays constant regardless of the method of transportation I use to travel between them. Since the cities’ positions depend uniquely on their latitudes and longitudes, the “air distance” is a state function, i.e., it is uniquely defined from a simple relationship between measurable variables. However, the “air distance” is not the distance that I will practically have to drive when I go from NYC to LA. Such “travel distance” depends on the method of transportation that I decide to take (airplane vs. car vs. train vs. boat vs. …). It will depend on many other factors, such as the choice of the road to be traveled (if going by car), the atmospheric conditions (if flying), and so on. A typical “travel distance” by car is, for example, about 4,500 km, which is about 12% more than the “air distance.” Indeed, we could even design a very inefficient road trip that avoids all highways and will result in a “travel distance” of 8,000 km or even more (200% of the “air distance”). The “travel distance” is a clear example of a path function because it depends on the specific path that I decide to travel to go from NYC to LA.
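The equality of the mixed partial derivatives in Equation \ref{1.2.8} is also easy to confirm numerically; a sketch using central finite differences (the state point and step sizes are arbitrary choices):

```python
R = 8.314  # ideal gas constant, J / (mol K)

def P(T, Vm):
    """Ideal gas pressure as a function of its natural variables."""
    return R * T / Vm

def mixed_partial(f, x, y, hx=1e-4, hy=1e-4):
    """Central-difference estimate of d2f/(dx dy) at the point (x, y)."""
    return (f(x + hx, y + hy) - f(x + hx, y - hy)
            - f(x - hx, y + hy) + f(x - hx, y - hy)) / (4 * hx * hy)

T0, Vm0 = 298.15, 0.0245            # arbitrary state point
d2P_TV = mixed_partial(P, T0, Vm0)                       # differentiate T then Vm
d2P_VT = mixed_partial(lambda Vm, T: P(T, Vm), Vm0, T0)  # differentiate Vm then T
exact  = -R / Vm0 ** 2              # analytic result, Equations (1.2.9)-(1.2.10)

print(d2P_TV, d2P_VT, exact)        # all three agree to within the step error
```

Because \(P(T,\overline{V})\) is a state function, the two orders of differentiation agree; for a path function this numerical test would fail.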
This page titled 1.2: Thermodynamic Variables is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
10.1: Reaction Quotient and Equilibrium Constant
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/10%3A_Chemical_Equilibrium/10.01%3A_Reaction_Quotient_and_Equilibrium_Constant
Let’s consider a prototypical reaction at constant \(T,P\):\[ a\mathrm{A} + b\mathrm{B} \rightarrow c\mathrm{C} + d\mathrm{D} \label{10.1.1} \]The Gibbs free energy of the reaction is defined as:\[ \Delta_{\text{rxn}} G = G_{\text{products}} - G_{\text{reactants}} = G^{\text{C}} + G^{\text{D}} - G^{\text{A}}-G^{\text{B}}, \label{10.1.2} \]and replacing the absolute Gibbs free energies with the chemical potentials \(\mu_i\), we obtain:\[ \Delta_{\text{rxn}} G = c \mu_{\text{C}} + d \mu_{\text{D}} - a \mu_{\text{A}}- b\mu_{\text{B}}. \label{10.1.3} \]Assuming the reaction is happening in the gas phase, we can then use Equation 9.4.6 to replace the chemical potentials with their value in the reaction mixture, as:\[\begin{equation} \begin{aligned} \mkern-60mu \Delta_{\text{rxn}} G =& \; c (\mu_{\text{C}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{C}}) + d (\mu_{\text{D}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{D}}) - a (\mu_{\text{A}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{A}}) - b (\mu_{\text{B}}^{-\kern-6pt{\ominus}\kern-6pt-}+RT \ln P_{\text{B}}) \\[4pt] =& \; \underbrace{c \mu_{\text{C}}^{-\kern-6pt{\ominus}\kern-6pt-}+ d \mu_{\text{D}}^{-\kern-6pt{\ominus}\kern-6pt-}- a \mu_{\text{A}}^{-\kern-6pt{\ominus}\kern-6pt-}- b\mu_{\text{B}}^{-\kern-6pt{\ominus}\kern-6pt-}}_{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}} +RT \ln \dfrac{P_{\text{C}}^c \cdot P_{\text{D}}^d}{P_{\text{A}}^a \cdot P_{\text{B}}^b}. \end{aligned} \end{equation} \label{10.1.4} \]We can define a new quantity called the reaction quotient as a function of the partial pressures of each substance:\(^1\)\[ Q_P = \dfrac{P_{\text{C}}^c \cdot P_{\text{D}}^d}{P_{\text{A}}^a \cdot P_{\text{B}}^b}, \label{10.1.5} \]and we can then simply rewrite Equation \ref{10.1.4} using Equation \ref{10.1.5} as:\[ \Delta_{\text{rxn}} G = \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln Q_P.
\label{10.1.6} \]This equation tells us that the sign of \(\Delta_{\text{rxn}} G\) is influenced by the reaction quotient \(Q_P\). For a spontaneous reaction at the beginning, the partial pressures of the reactants are much higher than the partial pressures of the products, therefore \(Q_P \ll 1\) and \(\Delta_{\text{rxn}} G < 0\), as we expect. As the reaction proceeds, the partial pressures of the products will increase, while the partial pressures of the reactants will decrease. Consequently, both \(Q_P\) and \(\Delta_{\text{rxn}} G\) will increase. The net reaction will stop when \(\Delta_{\text{rxn}} G = 0\), which is the chemical equilibrium point. At the reaction equilibrium:\[ \Delta_{\text{rxn}} G = 0 = \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln K_P, \label{10.1.7} \]where we have defined a new quantity called the equilibrium constant, as the value the reaction quotient assumes when the reaction reaches equilibrium, and we have denoted it with the symbol \(K_P\).\(^2\) From Equation \ref{10.1.7} we can derive the following fundamental equation for the standard Gibbs free energy of reaction:\[ \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}= - RT \ln K_P. \label{10.1.8} \]To extend the concept of \(K_P\) beyond the four species in the prototypical reaction of Equation \ref{10.1.1}, we can use the product symbol \(\left( \prod_i \right)\), and write:\[ K_P=\prod_i P_{i,\text{eq}}^{\nu_i}, \label{10.1.9} \]where \(P_{i,\text{eq}}\) are the partial pressures of each species at equilibrium. Equation \ref{10.1.9} is in principle valid for ideal gases only. However, reactions involving ideal gases are pretty rare.
As such, we can further extend the concept of equilibrium constant and write:\[ K_{\text{eq}} =\prod_i a_{i,\text{eq}}^{\nu_i}, \label{10.1.10} \]where we have replaced the partial pressures at equilibrium, \(P_{i,\text{eq}}\), with a new concept introduced initially by Gilbert Newton Lewis (1875–1946),\(^3\) which he termed activity and represented with the letter \(a\). For ideal gases, it is clear that \(a_i=P_i/P^{-\kern-6pt{\ominus}\kern-6pt-}\). For non-ideal gases, the activity is equal to the fugacity \(a_i=f_i/P^{-\kern-6pt{\ominus}\kern-6pt-}\), a concept that we will investigate in the next chapter. For pure liquids and solids, the activity is simply \(a_i=1\). For dilute solutions, the activity is equal to a measured concentration (for example, the mole fraction \(x_i\) in the liquid phase and \(y_i\) in the gas phase, or the molar concentration \([i]/[i]^{-\kern-6pt{\ominus}\kern-6pt-}\) with \([i]^{-\kern-6pt{\ominus}\kern-6pt-}= 1\;\text{mol/L}\)). Finally, for concentrated solutions, the activity is related to the measured concentration via an activity coefficient. We will return to the concept of activity in chapter 14, when we will specifically deal with solutions.
For now, it is interesting to use the activity to write the definition of the following two constants:\[ K_y =\prod_i \left( y_{i,\text{eq}} \right)^{\nu_i} \qquad \qquad \qquad \qquad K_C =\prod_i \left( [i]_{\text{eq}}/[i]^{-\kern-6pt{\ominus}\kern-6pt-}\right)^{\nu_i}, \label{10.1.11} \]which can then be related to \(K_P\) for a mixture of ideal gases using:\[ P_i = y_i P \qquad \qquad \qquad P_i=\dfrac{n_i}{V}RT=[i]RT, \label{10.1.12} \]which then results in:\[ K_P = K_y\cdot \left(\dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}\right)^{\Delta \nu} \qquad \qquad K_P = K_C \left( \dfrac{[i]^{-\kern-6pt{\ominus}\kern-6pt-}RT}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \right)^{\Delta \nu}, \label{10.1.13} \]with \(\Delta \nu =\sum_i \nu_i\). Using the general equilibrium constant, \(K_{\text{eq}}\), we can also rewrite the fundamental equation on \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}\) that we derived in Equation \ref{10.1.8} to be applicable at most conditions, as:\[ \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}= - RT \ln K_{\text{eq}}, \label{10.1.14} \]and since \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}\) depends on \(T,P\) and \(\{n_i\}\), it is useful to explore how \(K_{\text{eq}}\) depends on those variables as well. This page titled 10.1: Reaction Quotient and Equilibrium Constant is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
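The chain from \(Q_P\) to \(\Delta_{\text{rxn}} G\) and \(K_P\) can be sketched numerically. Everything below is illustrative: the hypothetical reaction A + B → 2 C, its standard Gibbs free energy, and the partial pressures are assumed values, not taken from the text.

```python
import math

R = 8.314        # J/(mol K)
T = 298.15       # K

# Hypothetical gas-phase reaction A + B -> 2 C (illustrative only).
dG_standard = -20.0e3            # assumed standard Gibbs energy, J/mol

# Assumed partial pressures (bar, relative to a 1 bar standard state).
P_A, P_B, P_C = 0.9, 0.9, 0.1

# Reaction quotient, Equation 10.1.5, with a = b = 1 and c = 2.
Q_P = P_C**2 / (P_A * P_B)

# Equation 10.1.6: instantaneous Gibbs free energy of reaction.
dG_rxn = dG_standard + R * T * math.log(Q_P)

# Equation 10.1.8: equilibrium constant from the standard Gibbs energy.
K_P = math.exp(-dG_standard / (R * T))

print(Q_P, dG_rxn, K_P)
```

Since \(Q_P<1\) early in the reaction, the logarithmic term makes \(\Delta_{\text{rxn}} G\) even more negative than \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}\), consistent with the discussion above; note also that for this reaction \(\Delta \nu = 0\), so Equation 10.1.13 gives \(K_y = K_P\).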
10.2: Temperature Dependence of Keq
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/10%3A_Chemical_Equilibrium/10.02%3A_Temperature_Dependence_of_Keq
To study the temperature dependence of \(K_{\text{eq}}\) we can use Equation 10.1.14 for the general equilibrium constant and write:\[ \ln K_{\text{eq}} = -\dfrac{\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}}{RT}, \label{10.2.1} \]which we can then differentiate with respect to temperature at constant \(P,\{n_i\}\) on both sides:\[ \left( \dfrac{\partial \ln K_{\text{eq}}}{\partial T} \right)_{P,\{n_i\}} = -\dfrac{1}{R} \left[ \dfrac{\partial \left( \dfrac{\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}}{T} \right)}{\partial T} \right]_{P,\{n_i\}}, \label{10.2.2} \]and, using the Gibbs–Helmholtz equation (Equation \ref{9.9}) to simplify the right-hand side, becomes:\[ \left( \dfrac{\partial \ln K_{\text{eq}}}{\partial T} \right)_{P,\{n_i\}} = -\dfrac{1}{R} \left( -\dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{T^2} \right) = \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{RT^2}, \label{10.2.3} \]which gives the dependence of \(\ln K_{\text{eq}}\) on \(T\) that we were looking for. Equation \ref{10.2.3} is also called the van ’t Hoff equation,\(^1\) and it is the mathematical expression of Le Chatelier’s principle. The simplest interpretation is as follows: for an exothermic reaction (\(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}<0\)), \(K_{\text{eq}}\) decreases as the temperature increases, while for an endothermic reaction (\(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}>0\)), \(K_{\text{eq}}\) increases as the temperature increases. If we integrate the van ’t Hoff equation between two arbitrary points at constant \(P\), and assuming constant \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}\), we obtain the following:\[ \int_1^2 d \ln K_{\text{eq}} = \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \int_1^2 \dfrac{dT}{T^2}, \label{10.2.4} \]which leads to the linear equation:\[ \ln K_{\text{eq}}(T_2) = \ln K_{\text{eq}}(T_1) - \dfrac{\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \left( \dfrac{1}{T_2}-\dfrac{1}{T_1} \right).
\label{10.2.5} \]This is the equation that produces the so-called van ’t Hoff plots, from which \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}\) can be experimentally determined as the slope (equal to \(-\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}/R\)) of \(\ln K_{\text{eq}}\) plotted against \(1/T\).
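A minimal numerical sketch of the integrated van ’t Hoff equation (Equation 10.2.5); the starting equilibrium constant and the reaction enthalpy below are assumed illustrative values.

```python
import math

R = 8.314  # J/(mol K)

def k_eq_at(K1, T1, T2, dH):
    """Integrated van 't Hoff equation (Equation 10.2.5),
    assuming a temperature-independent reaction enthalpy dH (J/mol)."""
    return K1 * math.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

# Illustrative endothermic reaction (dH > 0): K grows with T,
# in line with Le Chatelier's principle.
K_300 = 1.0e-2
K_400 = k_eq_at(K_300, 300.0, 400.0, dH=50.0e3)
print(K_400)
```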
10.3: Pressure and Composition Dependence of Keq
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/10%3A_Chemical_Equilibrium/10.03%3A_Pressure_and_Composition_Dependence_of_Keq
While \(K_P\) is independent of both pressure and number of moles for an ideal gas:\[ \left( \dfrac{\partial K_P}{\partial P} \right)_{T,\{n_i\}} = 0 \qquad \qquad \left( \dfrac{\partial K_P}{\partial n_i} \right)_{T,P} =0, \label{10.3.1} \]the same is not necessarily true for the other equilibrium constants. For example, it is easy to look at Equation 10.1.13 and determine that \(K_y\) usually depends on \(P\).\(^1\) Using Dalton’s Law, Equation 9.4.7, we can also notice that the equilibrium partial pressures of the reactants and products in a gas-phase reaction can be expressed in terms of their equilibrium mole fractions \(y_i\) and the total pressure \(P\). As such, we can use \(K_y\) to demonstrate that the equilibrium mole fractions will change when \(P\) changes,\(^2\) as demonstrated by the following exercise.Calculate the mole fraction change for the dissociation of \(\mathrm{Cl}_{2(g)}\) when the pressure is increased from \(P^{-\kern-6pt{\ominus}\kern-6pt-}\) to \(P_f=2.5 \;\text{bar}\) at constant \(T=2\,298\;\mathrm{K}\), knowing that \(\Delta_{\mathrm{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\mathrm{Cl}_{(g)}} = 105.3 \;\text{kJ/mol}\) and \(\Delta_{\mathrm{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\mathrm{Cl}_{(g)}} = 121.3 \;\text{kJ/mol}\), and remembering that both of these values are tabulated at \(T=298\;\text{K}\).Let’s consider the reaction: \[ \mathrm{Cl}_{2(g)} \rightleftarrows 2 \mathrm{Cl}_{(g)} \nonumber \]We can divide the exercise into two parts. In the first one, we will deal with calculating the equilibrium constant at \(T=2\,298\;\mathrm{K}\) from the data at \(T=298\;\mathrm{K}\).
In the second one, we will calculate the change in mole fraction when the pressure is increased from \(P^{-\kern-6pt{\ominus}\kern-6pt-}=1\;\text{bar}\) to \(P_f=2.5 \;\text{bar}\).Let’s begin the first part by calculating \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}\) and \(\Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}\) from: \[ \begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{(g)}} - \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} \\ \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{(g)}} - \Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}}, \end{aligned} \nonumber \] and since \(\text{Cl}_{2(g)}\) is an element in its most stable form at \(T=298\;\mathrm{K}\), its standard enthalpy and Gibbs free energy of formation are \(\Delta_{\text{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}_{\text{Cl}_{2(g)}} = 0\). Therefore:\(^3\) \[ \begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \cdot 105.3 - 0 = 210.6 \;\text{kJ/mol} \\ \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}&= 2 \cdot 121.3 - 0 = 242.6\;\text{kJ/mol}. \end{aligned} \nonumber \] Using Equation 10.1.8 to calculate \(K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K})\), we obtain:\(^4\) \[ \ln [ K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K}) ] = \dfrac{ - \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}}{RT} = \dfrac{-210.6\times10^3}{8.31 \cdot 298} = - 85.0. 
\nonumber \] We can now use the integrated van ’t Hoff equation, Equation 10.2.5, to calculate \(K_P\) at \(T=2\,298\;\text{K}\): \[ \begin{aligned} \ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},&2\,298\;\text{K})] = \ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},298\;\text{K})] \;+ \\ &-\dfrac{\Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}}{R} \left(\dfrac{1}{2\,298}-\dfrac{1}{298} \right), \end{aligned} \nonumber \] which becomes: \[ \begin{aligned} \ln [K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},&2\,298\;\text{K})] = - 85.0 \;+\\&-\dfrac{242.6\times 10^{3}}{8.31} \left(\dfrac{1}{2\,298}-\dfrac{1}{298} \right) = 0.262\;, \end{aligned} \nonumber \] which corresponds to: \[ K_P (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K}) = \exp (0.262)=1.30. \nonumber \]Let’s now move to the second part of the exercise, where we increase the pressure from \(1\;\text{bar}\) to \(2.5\;\text{bar}\) at constant \(T=2\,298\;\text{K}\). We start by writing the definition of \(K_P\) and \(K_y\): \[ K_P=\dfrac{P_\mathrm{Cl_{(g)}}^2}{P_{\mathrm{Cl}_{2(g)}}} \qquad \qquad K_y=\dfrac{y_\mathrm{Cl_{(g)}}^2}{y_{\mathrm{Cl}_{2(g)}}}, \nonumber \] and using Equation 10.1.13: \[ \begin{aligned} \Delta \nu &= 2 - 1 = 1 \\ K_P &= K_y \cdot \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \quad \rightarrow \quad K_y=K_P \left( \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \right)^{-1}, \end{aligned} \nonumber \] we can calculate the initial \(K_y\) at \(P_i=P^{-\kern-6pt{\ominus}\kern-6pt-}\), using: \[ K_y (P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K}) = K_P \left( \dfrac{P_i}{P^{-\kern-6pt{\ominus}\kern-6pt-}} \right)^{-1} = \dfrac{1.30}{1} = 1.30, \nonumber \] and calculate the initial mole fractions of \(\mathrm{Cl}_{(g)}\) and \(\mathrm{Cl}_{2(g)}\) at \(P^{-\kern-6pt{\ominus}\kern-6pt-}\), recalling that \(y_{\mathrm{Cl}_{2(g)}}=1-y_{\mathrm{Cl}_{(g)}}:\) \[ K_y (P_i,2\,298\;\text{K})=\dfrac{\left(y^i_{\mathrm{Cl}_{(g)}}\right)^2}{1-y^i_{\mathrm{Cl}_{(g)}}} = 1.30.
\nonumber \] Solving the quadratic equation, we obtain one negative answer, which is unphysical,\(^5\) and: \[ y_{\mathrm{Cl}_{(g)}}^i= 0.662 \quad \rightarrow \quad y_{\mathrm{Cl}_{2(g)}}^i=1-0.662 = 0.338. \nonumber \] At the end of the process, \(P_f=2.5\;\text{bar}\), and we obtain: \[ K_y (P_f,2\,298\;\text{K}) = 0.520 = K_P \dfrac{P^{-\kern-6pt{\ominus}\kern-6pt-}}{P_f} = \dfrac{1.30}{2.5}, \nonumber \] and, using the same technique used before to solve the quadratic equation: \[ K_y (P_f,2\,298\;\text{K})=\dfrac{\left(y^f_{\mathrm{Cl}_{(g)}}\right)^2}{1-y^f_{\mathrm{Cl}_{(g)}}} = 0.520, \nonumber \] gives: \[ y_{\mathrm{Cl}_{(g)}}^f=0.507 \quad \rightarrow \quad y_{\mathrm{Cl}_{2(g)}}^f=1-0.507 = 0.493. \nonumber \] To summarize, when we increase the pressure from \(1\;\text{bar}\) to \(2.5\;\text{bar}\) at \(T=2\,298\;\text{K}\), the equilibrium constant in terms of the mole fractions decreases from \(K_y(P^{-\kern-6pt{\ominus}\kern-6pt-},2\,298\;\text{K})=1.30\) to \(K_y(P_f=2.5\;\text{bar},2\,298\;\text{K})=0.520\). This reduction causes a shift of the equilibrium towards the reactants, with the mole fraction of \(\text{Cl}_{2(g)}\) increasing from \(y_{\text{Cl}_{2(g)}}^i = 0.338\) to \(y_{\text{Cl}_{2(g)}}^f = 0.493\) and the mole fraction of \(\text{Cl}_{(g)}\) decreasing from \(y_{\text{Cl}_{(g)}}^i = 0.662\) to \(y_{\text{Cl}_{(g)}}^f = 0.507\).The dependence of \(K_{\text{eq}}\) on \(P\) highlighted above is another mathematical expression of Le Chatelier’s principle, on this occasion for changes in pressure. The interpretation for a reaction happening in the gas phase is as follows: increasing the total pressure shifts the equilibrium towards the side of the reaction with the smaller number of moles of gas, while decreasing the total pressure shifts it towards the side with the larger number of moles of gas.
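The numbers in this exercise can be verified with a short script (using \(R=8.31\;\text{J/(mol K)}\) as in the text; the intermediate rounding in the text shifts the last digits slightly):

```python
import math

R = 8.31  # J/(mol K), as used in the text

dG = 2 * 105.3e3   # standard Gibbs free energy of reaction, J/mol
dH = 2 * 121.3e3   # standard enthalpy of reaction, J/mol

lnK_298 = -dG / (R * 298)                        # Equation 10.1.8
lnK_2298 = lnK_298 - dH / R * (1/2298 - 1/298)   # Equation 10.2.5
K_P = math.exp(lnK_2298)

def y_Cl(K_y):
    """Physical root of y^2/(1-y) = K_y, i.e. y^2 + K_y*y - K_y = 0."""
    return (-K_y + math.sqrt(K_y**2 + 4 * K_y)) / 2

y_i = y_Cl(K_P)        # at P = 1 bar, K_y = K_P
y_f = y_Cl(K_P / 2.5)  # at P = 2.5 bar, K_y = K_P / 2.5
print(K_P, y_i, y_f)
```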
11.1: The Ideal Gas Equation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/11%3A_Ideal_and_Non-Ideal_Gases/11.01%3A_The_Ideal_Gas_Equation
The concept of an ideal gas is a theoretical construct that allows for straightforward treatment and interpretation of gases’ behavior. As such, the ideal gas is a simplified model that we use to understand nature, and it does not correspond to any real system. The following two assumptions define the ideal gas model: the molecules of the gas occupy a negligible volume, and they do not interact with each other.Because of its simplicity, the ideal gas model has been the historical foundation of thermodynamics and of science in general. The first studies of the ideal gas behavior date back to the seventeenth century, and the scientists that performed them are among the founders of modern science.In 1662 Robert Boyle (1627–1691) found that the pressure and the volume of an ideal gas are inversely related at constant temperature. Boyle’s Law has the following mathematical description:\[ P\propto\dfrac{1}{V}\quad\text{at const.}\;T, \label{11.1.1} \]or, in other terms:\[ PV=k_1\quad\text{at const.}\;T, \label{11.1.2} \]which results in the familiar \(PV\) plots. As we already discussed in chapter 2, each of these curves is obtained at constant temperature, and is therefore called an “isotherm.”It took scientists more than a century to expand Boyle’s work and study the relationship between volume and temperature. In 1787 Jacques Alexandre César Charles (1746–1823) wrote the relationship known as Charles’s Law:\[ V\propto T\quad\text{at const.}\;P, \label{11.1.3} \]or, in other terms:\[ V=k_2 T\quad\text{at const.}\;P, \label{11.1.4} \]Each of the resulting curves is obtained at constant pressure, and is termed an “isobar.”The interesting thing about isobars is that each line seems to converge to the same point on the temperature axis when extrapolated to \(V\rightarrow 0\). This led to the introduction of the absolute temperature scale, suggesting that the temperature will never get smaller than \(-273.15^\circ\mathrm{C}\).It took an additional 21 years to write a formal relationship between pressure and temperature.
The following relationships were proposed by Joseph Louis Gay-Lussac (1778–1850) in 1808:\[ P\propto T\quad\text{at const.}\;V, \label{11.1.5} \]or, in other terms:\[ P=k_3 T\quad\text{at const.}\;V, \label{11.1.6} \]Each of the resulting curves is obtained at constant volume, and is termed an “isochore.”Ten years later, Amedeo Avogadro (1776–1856) discovered a seemingly unrelated principle by studying the composition of matter. Avogadro’s Law encodes the relationship between the number of moles in an ideal gas and its volume as:\[ V\propto n\quad\text{at const.}\;P,T, \label{11.1.7} \]or, in other terms:\[ V=k_4 n\quad\text{at const.}\;P,T. \label{11.1.8} \]Despite all of the ingredients being available for more than 20 years, it’s only in 1834 that Benoît Paul Émile Clapeyron (1799–1864) was finally able to combine them into what is now known as the ideal gas law. Using the same formulas obtained above, we can write:\[ PV=\underbrace{k_3 T}_{\text{from Gay-Lussac's}} \cdot \underbrace{k_4 n,}_{\text{from Avogadro's}} \label{11.1.9} \]which, by renaming the product of the two constants \(k_3\) and \(k_4\) as \(R\), becomes:\[ PV=nRT \label{11.1.10} \]The value of the constant \(R\) can be determined experimentally by measuring the volume that 1 mol of an ideal gas occupies at a constant temperature (e.g., at \(T=0^\circ\mathrm{C}\)) and a constant pressure (e.g., atmospheric pressure \(P=1\;\mathrm{atm}\)). At those conditions, the measured volume is 22.4 L, resulting in the following value of \(R\):\[ R=\dfrac{VP}{nT}=\dfrac{22.4 \cdot 1}{1 \cdot 273}=0.082 \;\dfrac{\text{L atm}}{\text{mol K}}, \label{11.1.11} \]which a simple conversion to SI units transforms into:\[ R=8.31\;\dfrac{\text{J}}{\text{mol K}}.
\label{11.1.12} \]
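The numerical determination of \(R\) in Equation 11.1.11, and its conversion to SI units, can be reproduced directly:

```python
# Molar volume of an ideal gas at T = 273 K and P = 1 atm (Eq. 11.1.11).
V, P, n, T = 22.4, 1.0, 1.0, 273.0   # L, atm, mol, K

R_L_atm = V * P / (n * T)            # in L atm / (mol K)

# Conversion to SI: 1 atm = 101325 Pa and 1 L = 1e-3 m^3.
R_SI = R_L_atm * 101325 * 1e-3       # in J / (mol K)
print(R_L_atm, R_SI)
```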
11.2: Behaviors of Non-Ideal Gases
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/11%3A_Ideal_and_Non-Ideal_Gases/11.02%3A_Behaviors_of_Non-Ideal_Gases
Non-ideal gases (sometimes also referred to as “real gases”) do not behave as ideal gases because at least one of the assumptions in Definition: Ideal Gas is violated. What characterizes non-ideal gases is that there is no unique equation that we can use to describe their behavior. For this reason, we have a plethora of empirical models, none of which is universally superior to the others. The van der Waals (vdW) equation is the only model that we will analyze in detail because of its simple interpretation. However, it is far from universal, and for several non-ideal gases it is severely inaccurate. Other popular non-ideal gas equations are the Clausius equation, the virial equation, the Redlich–Kwong equation, and several others.\(^1\)One of the simplest empirical equations that describe non-ideal gases was obtained in 1873 by Johannes Diderik van der Waals (1837–1923). The vdW equation includes two empirical parameters (\(a\) and \(b\)) with different values for different non-ideal gases. Each of the parameters corresponds to a correction for the breaking of one of the two conditions that define the ideal gas behavior (Definition: Ideal Gas). The vdW equation is obtained from the ideal gas equation by performing the following simple substitutions:\[ \begin{aligned} P & \;\rightarrow\;\left( P + \dfrac{a}{\overline{V}^2} \right)\\ \overline{V} & \;\rightarrow\;\left( \overline{V} - b\right),\\ \end{aligned} \label{11.2.1} \]which results in:\[ \begin{aligned} P\overline{V} &=RT \; \rightarrow \; \left( P + \dfrac{a}{\overline{V}^2} \right)\left( \overline{V} - b\right)=RT\\ P &=\dfrac{RT}{\overline{V} - b}-\dfrac{a}{\overline{V}^2}. \end{aligned} \label{11.2.2} \]The parameter \(a\) accounts for the presence of intermolecular interactions, while the parameter \(b\) accounts for the non-negligible volume of the gas molecules. Despite the parameters having simple interpretations, their values for each gas must be determined experimentally.
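A quick numerical comparison of Equation 11.2.2 with the ideal gas law; the \(a\) and \(b\) values below are approximate literature values for \(\mathrm{CO}_2\), quoted for illustration only (consult a table of vdW constants for precise numbers):

```python
R = 0.08314  # L bar / (mol K)

def p_ideal(V, T):
    """Ideal gas pressure for molar volume V (L/mol)."""
    return R * T / V

def p_vdw(V, T, a, b):
    """van der Waals pressure, Equation 11.2.2."""
    return R * T / (V - b) - a / V**2

# Approximate vdW parameters for CO2: a in L^2 bar / mol^2, b in L / mol.
a_co2, b_co2 = 3.64, 0.0427

T = 298.0  # K
V = 0.5    # L/mol, a moderately compressed gas
print(p_ideal(V, T), p_vdw(V, T, a_co2, b_co2))
```

At this molar volume the attractive term \(a/\overline{V}^2\) outweighs the excluded-volume correction, so the vdW pressure comes out lower than the ideal one.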
Values of these parameters for several representative non-ideal gases are available in tabulated form.We have already met William Thomson, also known as Lord Kelvin, and his seminal work on the second law of thermodynamics. In conjunction with that work, Thomson is famous for developing a sensitive method for measuring the temperature changes related to the expansion of a gas. These experiments improved on the earlier work by James Joule, and Lord Kelvin’s improved instrument is named the Joule–Thomson apparatus. The apparatus is composed of two chambers, each with its own mobile piston. The chambers are connected via a valve or a porous plug. The entire equipment is also thermally isolated from the surroundings. This instrument is a more sensitive version of the Joule expansion apparatus that we already described in section 3.Thomson realized that a gas flowing through an obstruction experiences a drop in pressure. If the entire apparatus is insulated, it will not exchange heat with its surroundings (\(Q=0\)), and each transformation will happen at adiabatic conditions. Let’s consider an initial condition with 1 mol of gas in the left chamber, occupying a volume \(V_l\), and a completely closed right chamber, for which \(V_r^i=0\). After the process completes, the volume of the left chamber will reduce to \(V_l^f=0\), while the volume of the right chamber will be \(V_r\). Using the first law of thermodynamics, we can write: \[ \Delta U=U_r-U_l=\underbrace{Q}_{=0}+W=W_l+W_r, \label{11.2.3} \]with:\[ \begin{aligned} W_l &=-\int_{V_l}^0 P_l dV = P_l V_l\\ W_r &=-\int_0^{V_r} P_r dV = - P_r V_r.
\end{aligned} \label{11.2.4} \]Substituting Equation \ref{11.2.4} into Equation \ref{11.2.3} results in:\[ \begin{aligned} U_r-U_l &=P_l V_l-P_r V_r \\ \underbrace{U_r+P_r V_r}_{H_r} &= \underbrace{U_l + P_l V_l}_{H_l}, \end{aligned} \label{11.2.5} \]which, using the definition of enthalpy \(H=U+PV\), becomes:\[ \begin{aligned} H_r &=H_l \\ \Delta H &=0, \end{aligned} \label{11.2.6} \]or, in other words, the process is isenthalpic. Using the total differential of \(H\):\[ dH=\left(\dfrac{\partial H}{\partial T} \right)_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP = C_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP, \label{11.2.7} \]we obtain:\[ \Delta H=\int dH = \int C_P dT + \int \left(\dfrac{\partial H}{\partial P} \right)_T dP =0, \label{11.2.8} \]or, in purely differential form:\[ dH = C_P dT + \left(\dfrac{\partial H}{\partial P} \right)_T dP =0. \label{11.2.9} \]From Equation \ref{11.2.9} we can define a new coefficient, called the Joule–Thomson coefficient, \(\mu_{\mathrm{JT}}\), that measures the rate of change of temperature of a gas with respect to pressure in the Joule–Thomson process:\[ \mu_{\mathrm{JT}}=\left( \dfrac{\partial T}{\partial P} \right)_H=-\dfrac{1}{C_P} \left( \dfrac{\partial H}{\partial P} \right)_T \label{11.2.10} \]The value of \(\mu_{\mathrm{JT}}\) depends on the type of gas, the temperature and pressure before expansion, and the heat capacity at constant pressure of the gas. The temperature at which \(\mu_{\mathrm{JT}}\) changes sign is called the “Joule–Thomson inversion temperature.” Since the pressure decreases during an expansion, \(\partial P\) is negative by definition, and the following possibilities are available for \(\mu_{\mathrm{JT}}\): if \(\mu_{\mathrm{JT}}>0\) the gas cools upon expansion, if \(\mu_{\mathrm{JT}}<0\) the gas warms upon expansion, and if \(\mu_{\mathrm{JT}}=0\) the temperature stays constant.For example, helium has a very low Joule–Thomson inversion temperature at standard pressure \((T=45\;\text{K})\), and it warms when expanded at constant enthalpy at typical room temperatures.
The only other gases that have standard inversion temperatures lower than room temperature are hydrogen and neon. On the other hand, nitrogen and oxygen have high inversion temperatures (\(T=621\;\text{K}\) and \(T=764\;\text{K}\), respectively), and they both cool when expanded at room temperature. Therefore, it is possible to use the Joule–Thomson effect in refrigeration processes such as air conditioning.\(^2\) As we already discussed in chapter 3, the temperature of an ideal gas stays constant in an adiabatic expansion, and therefore its Joule–Thomson coefficient is always equal to zero.
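In the low-pressure limit of the vdW equation, a standard textbook approximation (not derived in this section) gives \(\mu_{\mathrm{JT}} \approx (2a/RT - b)/C_P\), so the inversion temperature can be estimated as \(T_{\text{inv}} \approx 2a/(Rb)\). The sketch below applies this with approximate vdW parameters (illustrative values); note that the vdW estimate is only qualitative and, for instance, overestimates the experimental inversion temperature of \(\mathrm{N}_2\) quoted above.

```python
R = 0.08314  # L bar / (mol K)

def T_inversion(a, b):
    """Low-pressure vdW estimate of the Joule-Thomson inversion
    temperature, T_inv = 2a/(Rb) (a qualitative approximation)."""
    return 2 * a / (R * b)

# Approximate vdW parameters: a in L^2 bar / mol^2, b in L / mol.
gases = {"He": (0.0346, 0.0238), "N2": (1.37, 0.0387)}

for name, (a, b) in gases.items():
    T_inv = T_inversion(a, b)
    behavior = "cools" if T_inv > 298 else "warms"
    print(f"{name}: T_inv ~ {T_inv:.0f} K -> {behavior} on expansion at 298 K")
```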
11.3: Critical Phenomena
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/11%3A_Ideal_and_Non-Ideal_Gases/11.03%3A_Critical_Phenomena
The compressibility factor is a correction coefficient that describes the deviation of a real gas from ideal gas behavior. It is usually represented with the symbol \(z\), and is calculated as:\[ z=\dfrac{\overline{V}}{\overline{V}_{\text{ideal}}} = \dfrac{P \overline{V}}{RT}. \label{11.3.1} \]It is evident from Equation \ref{11.3.1} that the compressibility factor is dependent on the pressure, and for an ideal gas \(z=1\) always. For a non-ideal gas at any given pressure, \(z\) can be higher or lower than one, separating the behavior of non-ideal gases into two possibilities. The dependence of the compressibility factor on pressure can be compared for gases such as \(\mathrm{H}_2\) and \(\mathrm{CO}_2\).The two types of possible behaviors are differentiated based on the compressibility factor at \(P\rightarrow 0\). To analyze these situations, we can use the vdW equation to calculate the compressibility factor as:\[ z= \dfrac{\overline{V}}{RT} \left( \dfrac{RT}{\overline{V}-b} -\dfrac{a}{\overline{V}^2} \right), \label{11.3.2} \]and then we can differentiate this equation at constant temperature with respect to changes in the pressure near \(P=0\), to obtain:\[ \left. \left( \dfrac{\partial z}{\partial P}\right)_T \right|_{P=0} = \dfrac{1}{RT} \left( b -\dfrac{a}{RT} \right), \label{11.3.3} \]which is then interpreted as follows: when \(b > a/(RT)\) the initial slope is positive and \(z\) grows above 1 as the pressure increases, while when \(b < a/(RT)\) the initial slope is negative and \(z\) initially drops below 1.The dependence of the compressibility factor on temperature results in different plots for each of the two types of behavior.Both type I and type II non-ideal gases will approach the ideal gas behavior as \(T\rightarrow \infty\), because \(\dfrac{1}{RT}\rightarrow 0\) as \(T\rightarrow \infty\). For type II gases, there are three interesting situations, depending on whether the temperature is below, equal to, or above the temperature at which the initial slope in Equation \ref{11.3.3} is zero (the Boyle temperature, \(T_{\text{B}}=a/(Rb)\)), at which the gas behaves almost ideally over an extended range of pressures.Let’s now turn our attention to the \(PV\) phase diagram of a non-ideal gas. We can start the analysis from an isotherm at a high temperature.
Since every gas will behave as an ideal gas at those conditions, the corresponding isotherms will look similar to those of an ideal gas (\(T_5\) and \(T_4\)). Lowering the temperature, we start to see the deviation from ideality getting more prominent (\(T_3\)) until we reach a particular temperature called the critical temperature, \(T_c\).The temperature above which no appearance of a second phase is observed, regardless of how high the pressure becomes.At the critical temperature and below, the gas liquefies when the pressure is increased. For this reason, the liquefaction of a gas is called a critical phenomenon.The critical temperature is the coordinate of a unique point, called the critical point, that can be visualized in the three-dimensional \(T,P,V\) diagram of each gas.\(^1\)The critical point has coordinates \(\{T_c,P_c, \overline{V}_c\}\). These critical coordinates can be determined from the vdW equation at \(T_c\), as:\[ T_c=\dfrac{8a}{27Rb} \qquad P_c=\dfrac{a}{27b^2} \qquad \overline{V}_c=3b. \label{11.3.4} \]These relations are used, in practice, to determine the vdW constants \(a,b\) from the experimentally measured critical isotherms.The critical compressibility factor, \(z_c\), is predicted from the vdW equation as:\[ z_c=\dfrac{P_c \overline{V}_c}{R T_c}=\left( \dfrac{a}{27b^2} \right) \left( \dfrac{3b}{R} \right) \left( \dfrac{27Rb}{8a} \right) = \dfrac{3}{8} = 0.375, \label{11.3.5} \]a value that is independent of the gas. Experimentally measured values of \(z_c\) for different non-ideal gases are in the range of 0.2–0.3. These values can be used to infer the accuracy of the vdW equation for each non-ideal gas. Since the experimental \(z_c\) is usually lower than the one calculated from the vdW equation, we can deduce that the vdW equation overestimates the critical molar volume.Notice how slicing the \(PT\overline{V}\) diagram at constant \(T\) results in the \(PV\) diagram discussed above.
On the other hand, projecting the \(PT\overline{V}\) diagram onto the \(PT\) plane results in the \(PT\) diagram that we will examine in detail in the next chapter.
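Equations 11.3.4 and 11.3.5 can be checked numerically; the \(a\) and \(b\) values below are approximate literature values for \(\mathrm{CO}_2\), quoted for illustration only:

```python
R = 0.08314  # L bar / (mol K)

def critical_point(a, b):
    """Critical coordinates predicted by the vdW equation (Eq. 11.3.4):
    Tc = 8a/(27Rb), Pc = a/(27 b^2), Vc = 3b."""
    return 8 * a / (27 * R * b), a / (27 * b**2), 3 * b

# Approximate vdW parameters for CO2 (illustrative values).
Tc, Pc, Vc = critical_point(a=3.64, b=0.0427)

# Equation 11.3.5: z_c is always 3/8 for a vdW gas, regardless of a, b.
zc = Pc * Vc / (R * Tc)
print(Tc, Pc, Vc, zc)
```

With these inputs the predicted \(T_c\) and \(P_c\) land close to the experimental critical point of \(\mathrm{CO}_2\) (roughly 304 K and 74 bar), while \(z_c\) comes out at the universal vdW value of 0.375.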
11.4: Fugacity
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/11%3A_Ideal_and_Non-Ideal_Gases/11.04%3A_Fugacity
The chemical potential of a pure ideal gas can be calculated using Equation 9.4.5. Since we are not interested in mixtures, we can drop the asterisk in \(\mu^*\), and rewrite Equation 9.4.5 as:\[ \mu_{\text{ideal}} = \mu^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln \dfrac{P}{P^{-\kern-6pt{\ominus}\kern-6pt-}}. \label{11.4.1} \]For a non-ideal gas, the pressure cannot be used in Equation \ref{11.4.1} because each gas’s response to changes in pressure is not universal. We can, however, define a new variable to replace the pressure in Equation \ref{11.4.1} and call it fugacity (\(f\)).The effective pressure of a non-ideal gas that corresponds to the pressure of an ideal gas with the same temperature and chemical potential as the non-ideal one.Equation \ref{11.4.1} then becomes:\[ \mu_{\text{non-ideal}} = \mu^{-\kern-6pt{\ominus}\kern-6pt-}+ RT \ln \dfrac{f}{P^{-\kern-6pt{\ominus}\kern-6pt-}}. \label{11.4.2} \]Since the chemical potential of a gas \(\mu\) is equal to the standard chemical potential \(\mu^{-\kern-6pt{\ominus}\kern-6pt-}\) when \(P=P^{-\kern-6pt{\ominus}\kern-6pt-}\), it is easy to use Equation \ref{11.4.2} to demonstrate that:\[ \lim_{P\rightarrow 0} \dfrac{f}{P} = 1; \label{11.4.3} \]in other words, any non-ideal gas will approach the ideal gas behavior as \(P\rightarrow 0\). This condition, in conjunction with the \(T\rightarrow \infty\) behavior obtained in the previous section, results in the following statement:The highest chances for any gas to behave ideally happen at high temperature and low pressure.We can now return our attention to the definition of fugacity.
Remembering that the chemical potential is the molar Gibbs free energy of a substance, we can write, at constant \(T\):\[ d \mu_{\text{ideal}} = \overline{V}_{\text{ideal}}dP, \label{11.4.4} \]and:\[ d \mu_{\text{non-ideal}} = \overline{V}_{\text{non-ideal}}dP. \label{11.4.5} \]Subtracting Equation \ref{11.4.4} from Equation \ref{11.4.5}, we obtain:\[ d \mu_{\text{non-ideal}}-d \mu_{\text{ideal}} = \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP, \label{11.4.6} \]which we can then integrate between \(0\) and \(P\):\[ \mu_{\text{non-ideal}}-\mu_{\text{ideal}} = \int_0^P \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP. \label{11.4.7} \]Using Equations \ref{11.4.1} and \ref{11.4.2} we can then replace the definition of chemical potentials, resulting in:\[ \ln f - \ln P = \dfrac{1}{RT} \int_0^P \left(\overline{V}_{\text{non-ideal}} - \overline{V}_{\text{ideal}} \right) dP, \label{11.4.8} \]which gives us a mathematical definition of the fugacity, as:\[ f = P \cdot \underbrace{\exp\left[ \dfrac{1}{RT} \int_0^P \left(\overline{V}_{\text{non-ideal}}-\overline{V}_{\text{ideal}} \right) dP \right]}_{\text{fugacity coefficient, }\phi(T,P)}. \label{11.4.9} \]The exponential term in Equation \ref{11.4.9} is complicated to write, but it can be interpreted as a coefficient, unique to each non-ideal gas, that can be measured experimentally. Such coefficients are dependent on pressure and temperature and are called the fugacity coefficients. Using the letter \(\phi\) to represent the fugacity coefficient, we can rewrite Equation \ref{11.4.9} as:\[ f = \phi P, \label{11.4.10} \]which gives us a straightforward interpretation of the fugacity as an effective pressure. As such, the fugacity will have the same units as the pressure, while the fugacity coefficients will be dimensionless.As we already saw in chapter 10, the fugacity can be used to replace the pressure in the definition of the equilibrium constant for reactions that involve non-ideal gases.
The new constant is usually called \(K_f\), and is obtained from:\[ K_f=\prod_i f_{i,\text{eq}}^{\nu_i} = K_P \prod_i \phi_{i}^{\nu_i}. \label{11.38} \]This page titled 11.4: Fugacity is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
12.1: Phase Stability
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/12%3A_Phase_Equilibrium/12.01%3A_Phase_Stability
We have already encountered the gas, liquid, and solid phases and already discussed some of their properties. These terms are intuitive since these are the three most common states of matter.\(^1\) For this reason, we have previously used the terms without the necessity of formally defining their meaning. However, a formal definition of “phase” is necessary to discuss several concepts in this chapter and the following ones:A region of the system with homogeneous chemical composition and physical state.Let’s now use the total differential of the chemical potential and the definition of molar Gibbs free energy for one component:\[\begin{equation} \begin{aligned} d\mu &= \left( \dfrac{\partial \mu}{\partial T} \right)_P dT + \left( \dfrac{\partial \mu}{\partial P} \right)_T dP \\ d\mu &= -SdT+\overline{V}dP, \end{aligned} \end{equation} \label{12.1.1} \]to write:\[ \left( \dfrac{\partial \mu}{\partial T} \right)_P=-S \qquad \left( \dfrac{\partial \mu}{\partial P} \right)_T =\overline{V}. \label{12.1.2} \]We can use these definitions to study the dependence of the chemical potential on pressure and temperature. If we plot \(\mu\) as a function of \(T\) using the first coefficient in Equation \ref{12.1.2}, we obtain the diagram in . The diagram presents three curves, each corresponding to one of the three most common states of matter – solid, liquid, and gas. As we saw in several previous chapters, the entropy of a phase is almost constant with respect to temperature,\(^2\) and therefore the three curves are essentially straight, with negative slopes equal to \(-S\). This also explains why the line for the solid phase is basically flat since, according to the third law, the entropy of a perfect solid is zero, and close to zero if the solid is not perfect.
The difference between the slopes of the three lines is explained by the fact that each of these states has a different value of entropy:\[ \left( \dfrac{\partial \mu_{\text{solid}}}{\partial T} \right)_P =-S_{\text{s}} \qquad \left( \dfrac{\partial \mu_{\text{liquid}}}{\partial T} \right)_P =-S_{\text{l}} \qquad \left( \dfrac{\partial \mu_{\text{gas}}}{\partial T} \right)_P =-S_{\text{g}}, \label{12.1.3} \]and since the entropy of a gas is always bigger than the entropy of a liquid, which, in turn, is bigger than the entropy of a solid (\(S_{\text{g}} \gg S_{\text{l}}>S_{\text{s}}\)), we obtain three lines with different slopes that intersect each other. At each temperature, the phase with the lowest chemical potential will be the most stable (see red segments in ). At each intersection between two lines, the two phases have the same chemical potential, and the corresponding temperature is the temperature at which the two phases coexist and the phase change happens. Recalling from general chemistry, at the junction between the solid and the liquid lines, the fusion (fus) process occurs, and the corresponding temperature is called the melting point \(T_{\text{m}}\). At the junction between the liquid and the gas lines, the vaporization (vap) process happens, and the corresponding temperature is called the boiling point \(T_{\text{b}}\). Depending on the substance and the pressure at which the process happens, the solid line might intersect the gas line before the liquid line. When that occurs, the liquid phase is never observed, and only the sublimation (subl) process happens, at the sublimation point \(T_{\text{subl}}\).The effects of pressure on this diagram can be studied using the second coefficient in Equation \ref{12.1.2}. For the majority of substances, \(\overline{V}_{\text{g}} \gg \overline{V}_{\text{l}} > \overline{V}_{\text{s}}\), hence the curves will shift to lower values when the pressure is reduced, as in .
Notice also that since \(\overline{V}_{\text{l}} \cong \overline{V}_{\text{s}}\), the shifts for both the solid and liquid lines are much smaller than the shift for the gas line. These shifts also translate to different positions of the junctions, which means the phase changes will occur at different temperatures. Therefore, both the melting point and the boiling point generally increase when the pressure is increased (and vice versa). Notice how the change for the melting point is always much smaller than the change for the boiling point. Water is a notable exception to these trends because \(\overline{V}_{\mathrm{H}_2\mathrm{O,l}} < \overline{V}_{\text{ice}}\). This explains the experimental observation that increasing the pressure on ice causes the ice to melt.\(^3\)Considering the intersections between two lines, two phases are in equilibrium with each other at each of these points. Therefore their chemical potentials must be equal:For two or more phases to be in equilibrium, their chemical potentials must be equal:\[ \mu_{\alpha} = \mu_{\beta}. \label{12.1.4} \]If we now change either the temperature or the pressure, the location of the intersection will be shifted (see again and the discussion above). For infinitesimal changes in the variables, the new location will be:\[ \mu_{\alpha} + d\mu_{\alpha}= \mu_{\beta}+d\mu_{\beta}, \label{12.1.5} \]which, using Equation \ref{12.1.4}, simply becomes:\[ d\mu_{\alpha}= d\mu_{\beta}. \label{12.1.6} \]Replacing each differential with the definition of the chemical potential in Equation \ref{12.1.1}, we obtain:\[\begin{equation} \begin{aligned} -S_{\alpha}dT+\overline{V}_{\alpha}dP &= -S_{\beta}dT+\overline{V}_{\beta}dP \\ \underbrace{\left(S_{\beta}-S_{\alpha}\right)}_{\Delta S} dT &= \underbrace{\left( \overline{V}_{\beta}-\overline{V}_{\alpha}\right)}_{\Delta \overline{V}} dP, \end{aligned} \end{equation} \label{12.1.7} \]which can be rearranged into:\[ \dfrac{dP}{dT}=\dfrac{\Delta S}{\Delta \overline{V}}.
\label{12.1.8} \]This equation is known as the Clapeyron equation, and it is the mathematical relation at the basis of pressure–temperature phase diagrams. Plotting the results of Equation \ref{12.1.8} on a \(PT\) phase diagram for common substances results in three lines, each representing the equilibrium between two different phases. These diagrams are useful to study the relationships between the phases of a substance.
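As a worked example of Equation \ref{12.1.8}, the sketch below estimates the slope of the solid–liquid coexistence line of water near its normal melting point. The enthalpy of fusion (\(\approx 6.01\;\mathrm{kJ/mol}\)) and the molar volumes of liquid water and ice (\(\approx 18.0\) and \(19.7\;\mathrm{cm^3/mol}\)) are rounded literature values, quoted as assumptions.

```python
# Clapeyron slope dP/dT = Delta S / Delta V for the ice / liquid-water
# equilibrium near the normal melting point (Eq. 12.1.8).
dH_fus = 6010.0   # J/mol, molar enthalpy of fusion of water (assumed value)
T_m = 273.15      # K, normal melting point
V_liq = 18.0e-6   # m^3/mol, molar volume of liquid water (assumed value)
V_sol = 19.7e-6   # m^3/mol, molar volume of ice (assumed value)

dS = dH_fus / T_m   # Delta S = Delta H / T at equilibrium
dV = V_liq - V_sol  # negative for water: ice is less dense than the liquid
slope = dS / dV     # Pa/K

print(slope / 1e5)  # in bar/K: negative and steep
```

The negative sign reproduces the anomalous behavior of water discussed above: because \(\Delta \overline{V}<0\), increasing the pressure on ice lowers its melting point.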
12.2: Gibbs Phase Rule
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/12%3A_Phase_Equilibrium/12.02%3A_Gibbs_Phase_Rule
In chapter 1, we have already seen that the number of independent variables required to describe an ideal gas is two. This number was derived by counting the total number of variables \((3: P,\overline{V},T)\) and reducing it by one, because the ideal gas law constrains the value of one of them once the other two are fixed. For a generic system potentially containing more than one chemical substance in several different phases, however, the number of independent variables can be different from two. For a system composed of \(c\) components (chemical substances) and \(p\) phases, the number of independent variables, \(f\), is given by the Gibbs phase rule:\[ f=c-p+2. \label{12.2.1} \]The Gibbs phase rule derives from the fact that different phases are in equilibrium with each other at some conditions, resulting in the reduction of the number of independent variables at those conditions. More rigorously, when two phases are in thermodynamic equilibrium, their chemical potentials are equal (see Equation 12.1.4). For each such equality, the number of independent variables—also called the number of degrees of freedom—is reduced by one. For example, the chemical potentials of the liquid and its vapor depend on both \(T\) and \(P\). But when these phases are in equilibrium with each other, their chemical potentials must be equal. If either the pressure or the temperature is fixed, the other variable will be uniquely determined by the equality relation. In other words, when a liquid is in equilibrium with its vapor at a given pressure, the temperature is determined by the fact that the chemical potentials of the two phases are the same, and it is denoted as the boiling temperature \(T_{\text{b}}\).
Similarly, at a given temperature, the pressure of the vapor is uniquely determined by the same equality relation and is denoted as the vapor pressure, \(P^*\).The Gibbs phase rule is obtained considering that the number of independent variables is given by the total number of variables minus the constraints. The total number of variables is given by temperature, pressure, plus all the variables required to describe each of the phases. The composition of each phase is determined by \((c-1)\) variables.\(^1\) The number of constraints is determined by the number of possible equilibrium relations, which is \(c(p-1)\) since the chemical potential of each component must be equal in all phases. The number of degrees of freedom \(f\) is then given by\[\begin{align*} f &=(c-1)p+2-c(p-1) \\[4pt] &=c-p+2 \end{align*} \]which is the Gibbs phase rule, as in Equation \ref{12.2.1}.
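The counting argument above can be wrapped in a one-line function; a minimal sketch, with the function name chosen here for illustration:

```python
def gibbs_phase_rule(c, p):
    """Degrees of freedom f = c - p + 2 for c components and p phases
    (Eq. 12.2.1)."""
    f = c - p + 2
    if f < 0:
        raise ValueError("more coexisting phases than the phase rule allows")
    return f

# One component: liquid/vapor coexistence leaves one free variable,
# and the triple point leaves none.
print(gibbs_phase_rule(1, 2))  # 1
print(gibbs_phase_rule(1, 3))  # 0
```

The guard against \(f<0\) encodes the observation, made in the next section, that a pure substance cannot exhibit a quadruple point.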
12.3: PT Phase Diagrams
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/12%3A_Phase_Equilibrium/12.03%3A_PT_Phase_Diagrams
Let’s now discuss the pressure–temperature diagram of a typical substance, as reported in . Each of the lines reported in the diagram represents an equilibrium between two phases, and therefore it represents a condition that reduces the number of degrees of freedom to one. The lines can be determined using the Clapeyron equation, Equation 12.1.8. The interpretation of each line is as follows:For this equilibrium we can use Trouton’s rule, Equation 7.1.5, and write: \[ \Delta_{\text{vap}} S = S_{\text{g}}-S_{\text{l}} \cong 88 \; \dfrac{\text{J}}{\text{mol K}} > 0\quad \text{always}, \label{12.3.1} \]where the entropy of vaporization is always positive, even for cases where Trouton’s rule is violated. The difference in molar volumes is easily obtained, since the volume of the gas is always much greater than the volume of the liquid:\[ \overline{V}_{\text{g}} - \overline{V}_{\text{l}} \cong \overline{V}_{\text{g}} = 22.4\; \dfrac{\text{L}}{\text{mol}} >0\quad \text{always}. \label{12.3.2} \]Replacing these values in the Clapeyron equation, we obtain:\[ \dfrac{dP}{dT}=\dfrac{88}{22.4}\left( \dfrac{0.0831}{8.31} \right) = 0.04\;\dfrac{\text{bar}}{\text{K}} > 0 \quad \text{always}, \label{12.3.3} \]which is always positive, regardless of violations of Trouton’s rule. Notice how small this value is, meaning that the liquid–gas coexistence curve is relatively flat in the \(PT\) diagram.If we look at the signs of each quantity, this case is similar to the previous one: \[ \begin{equation} \begin{aligned} \Delta_{\text{subl}} S &> 0 \quad \text{always} \\ \Delta_{\text{subl}} \overline{V} &> 0 \quad \text{always} \\ \\ \dfrac{dP}{dT} &> 0 \quad \text{always}.
\end{aligned} \end{equation}\label{12.3.4} \]However, Trouton’s rule is not valid for the solid–gas equilibrium, and \(\dfrac{dP}{dT}\) will be larger than for the previous case.The final curve is for the solid–liquid equilibrium, for which we have:\[ \Delta_{\text{fus}} S = \dfrac{\Delta_{\text{fus}} H}{T_{\text{m}}} > 0 \quad \text{always}, \label{12.3.5} \]since fusion is always an endothermic process \((\Delta_{\text{fus}} H>0)\). On the other hand:\[ \Delta_{\text{fus}} \overline{V} = \overline{V}_{\text{l}} - \overline{V}_{\text{s}} > 0 \quad \text{generally}. \nonumber \]In other words, the difference between the molar volume of the liquid and that of the solid is positive for most substances, but it might be negative (for example for \(\mathrm{H}_2\mathrm{O}\)). As such:\[ \dfrac{dP}{dT} > 0 \quad \text{generally}. \label{12.3.6} \]For \(\mathrm{H}_2\mathrm{O}\) and a few other substances, \(\dfrac{dP}{dT}<0\), an anomalous behavior that has crucial consequences for the existence of life on earth.\(^1\) Because of this importance, this behavior is also depicted in using a dashed green line.Since the differences in molar volumes between the solid and the liquid phases are usually small (generally of the order of \(10^{-3}\;\mathrm{L}\)), \(\dfrac{dP}{dT}\) is always much larger than for the previous two cases. The resulting lines for the solid–liquid equilibria are almost vertical, regardless of the signs of their slopes.The only point in the \(PT\) diagram where all three phases coexist is called the triple point. The number of degrees of freedom at the triple point for every 1-component diagram is \(f=1-3+2=0\). The fact that the triple point has zero degrees of freedom means that its coordinates, \((T_{\text{tp}},P_{\text{tp}},\overline{V}_{\text{tp}})\), are uniquely determined for each chemical substance. For this reason, the value of the triple point of water was fixed by definition—rather than measured—until 2019.
This definition was necessary to establish the base unit of the thermodynamic temperature scale in the SI (the kelvin).\(^2\)In addition to the triple point where the solid, liquid, and gas phases meet, a triple point may involve more than one condensed phase. Triple points are common for substances with multiple solid phases (polymorphs), involving either two solid phases and a liquid one or three solid phases. Helium is a special case that presents a triple point involving two different fluid phases, called the lambda point. Since the number of degrees of freedom cannot be negative, the Gibbs phase rule for a 1-component diagram limits the number of coexisting phases to just three. Therefore, quadruple points (or higher coexistence points) are not possible for pure substances, even for polymorphs.\(^3\)Another point with a fixed position in the \(PT\) diagram is the critical point, \((T_{\text{c}},P_{\text{c}},\overline{V}_{\text{c}})\). We have already given the definition of the critical temperature in Definition: Critical Temperature. This point represents the end of the liquid–gas equilibrium curve, and it is also important to define different regions of the phase diagram, as in . A gas whose pressure and temperature are below the critical point is called a vapor. A gas whose temperature and pressure are above the critical ones is called a supercritical fluid. Finally, a liquid whose pressure is above the critical point is called a compressible liquid.\(^4\)
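The order-of-magnitude estimate of the liquid–gas Clapeyron slope in Equation \ref{12.3.3} can be checked directly in SI units; a minimal sketch using the same Trouton's-rule numbers:

```python
dS_vap = 88.0    # J/(mol K), Trouton's rule estimate of Delta_vap S
V_gas = 22.4e-3  # m^3/mol, molar volume of an ideal gas at STP

# Clapeyron slope of the liquid-gas coexistence line, Eq. (12.1.8),
# with Delta_vap V approximated by the molar volume of the gas:
slope = dS_vap / V_gas   # Pa/K

print(slope / 1e5)  # ~0.04 bar/K: the coexistence curve is relatively flat
```

One pascal per kelvin equals \(10^{-5}\) bar/K, so the result agrees with the hand conversion through the two values of \(R\).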
12.4: The Clausius-Clapeyron Equation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/12%3A_Phase_Equilibrium/12.04%3A_The_Clausius-Clapeyron_Equation
Let’s now take a closer look at the equilibrium between a condensed phase and the gas phase. For both the vaporization and sublimation processes, Clausius showed that the Clapeyron equation can be simplified by using:\[ \Delta_{\text{vap}} S = \dfrac{\Delta_{\text{vap}} H}{T} \qquad \Delta \overline{V}= \overline{V}_{\mathrm{g}} -\overline{V}_{\mathrm{l}} \cong \overline{V}_{\mathrm{g}}, \label{12.4.1} \]resulting in:\[ \dfrac{dP}{dT} = \dfrac{ \Delta_{\text{vap}} S}{\Delta \overline{V}} \cong \dfrac{ \Delta_{\text{vap}} H}{T \overline{V}_{\mathrm{g}}}. \label{12.4.2} \]Using the ideal gas law to replace the molar volume of the gas, we obtain:\[ \dfrac{dP}{dT} = \dfrac{P \Delta_{\text{vap}} H}{RT^2}, \label{12.4.3} \]which can be rearranged as:\[ \dfrac{dP}{P} = \dfrac{\Delta_{\text{vap}} H}{R} \dfrac{dT}{T^2}. \label{12.4.4} \]Equation \ref{12.4.4} is known as the Clausius–Clapeyron equation, and it describes how the vapor pressure of a substance depends on the temperature. Assuming \(\Delta_{\text{vap}} H\) to be independent of temperature, the Clausius–Clapeyron equation can be integrated to obtain:\[ \begin{aligned} \int_{P_i}^{P_f} \dfrac{dP}{P} &= \dfrac{\Delta_{\text{vap}} H}{R} \int_{T_i}^{T_f} \dfrac{dT}{T^2} \\ \ln \dfrac{P_f}{P_i} &=-\dfrac{\Delta_{\text{vap}} H}{R} \left( \dfrac{1}{T_f}-\dfrac{1}{T_i} \right). \end{aligned} \label{12.4.5} \]The integrated Clausius–Clapeyron equation shows that the vapor pressure depends exponentially on the temperature, so even a small change in the temperature results in a significant change in the vapor pressure. In fact, we rely daily on the drastic change of the vapor pressure of water with temperature when cooking most of our food. For example, at an external pressure of 1 bar, the vapor pressure of water rapidly grows from \(P^*=0.02\;\text{bar}\) to \(P^*=1\;\text{bar}\) when the temperature is increased from \(T=293\;\mathrm{K}\) (around room temperature) to \(T=373\;\mathrm{K}\) (boiling point).
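The water example above can be checked with the integrated equation, taking \(\Delta_{\text{vap}} H \approx 40.7\;\mathrm{kJ/mol}\) (an assumed, roughly temperature-independent literature value) and anchoring the curve at the normal boiling point:

```python
import math

R = 8.314        # J/(mol K)
dH_vap = 40.7e3  # J/mol, enthalpy of vaporization of water (assumed value)

def vapor_pressure(T, T_ref=373.15, P_ref=1.0):
    """Vapor pressure in bar from the integrated Clausius-Clapeyron
    equation (Eq. 12.4.5), anchored at (T_ref, P_ref)."""
    return P_ref * math.exp(-dH_vap / R * (1.0 / T - 1.0 / T_ref))

print(vapor_pressure(293.15))  # ~0.03 bar near room temperature
```

The one-constant integration slightly overestimates the \(\approx 0.02\;\text{bar}\) quoted above because \(\Delta_{\text{vap}} H\) is not truly constant over an 80 K interval, but the order of magnitude is reproduced correctly.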
The integrated Clausius–Clapeyron equation is also often used to determine the enthalpy of vaporization from measurements of vapor pressure at different temperatures.
13.1: Raoult’s Law and Phase Diagrams of Ideal Solutions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/13%3A_Multi-Component_Phase_Diagrams/13.01%3A_Raoults_Law_and_Phase_Diagrams_of_Ideal_Solutions
The behavior of the vapor pressure of an ideal solution can be mathematically described by a simple law established by François-Marie Raoult (1830–1901). Raoult’s law states that the partial pressure of each component, \(i\), of an ideal mixture of liquids, \(P_i\), is equal to the vapor pressure of the pure component, \(P_i^*\), multiplied by its mole fraction in the mixture, \(x_i\):\[ P_i=x_i P_i^*. \label{13.1.1} \]Raoult’s law applied to a system containing only one volatile component describes a line in the \(Px_{\text{B}}\) plot, as in . As emerges from , Raoult’s law divides the diagram into two distinct areas, each with three degrees of freedom.\(^1\) Each area contains a phase, with the vapor at the bottom (low pressure) and the liquid at the top (high pressure). Raoult’s law acts as an additional constraint for the points sitting on the line; therefore, the number of independent variables along the line is only two. Once the temperature is fixed and the vapor pressure is measured, the mole fraction of the volatile component in the liquid phase is determined.In an ideal solution, every volatile component follows Raoult’s law. Since the vapors in the gas phase behave ideally, the total pressure can be simply calculated using Dalton’s law as the sum of the partial pressures of the two components, \(P_{\text{TOT}}=P_{\text{A}}+P_{\text{B}}\). The corresponding diagram is reported in . The total vapor pressure, calculated using Dalton’s law, is reported in red.
The Raoult’s-law behavior of each of the two components is also reported using black dashed lines.Calculate the mole fraction in the vapor phase of a liquid solution composed of 67% of toluene (\(\mathrm{A}\)) and 33% of benzene (\(\mathrm{B}\)), given the vapor pressures of the pure substances: \(P_{\text{A}}^*=0.03\;\text{bar}\), and \(P_{\text{B}}^*=0.10\;\text{bar}\).The data available for the system are summarized as follows: \[\begin{equation} \begin{aligned} x_{\text{A}}=0.67 \qquad & \qquad x_{\text{B}}=0.33 \\ P_{\text{A}}^* = 0.03\;\text{bar} \qquad & \qquad P_{\text{B}}^* = 0.10\;\text{bar} \\ & P_{\text{TOT}} = ? \\ y_{\text{A}}=? \qquad & \qquad y_{\text{B}}=? \end{aligned} \end{equation}\label{13.1.2} \] The total pressure of the vapors can be calculated by combining Dalton’s and Raoult’s laws: \[\begin{equation} \begin{aligned} P_{\text{TOT}} &= P_{\text{A}}+P_{\text{B}}=x_{\text{A}} P_{\text{A}}^* + x_{\text{B}} P_{\text{B}}^* \\ &= 0.67\cdot 0.03+0.33\cdot 0.10 \\ &= 0.02 + 0.03 = 0.05 \;\text{bar} \end{aligned} \end{equation}\label{13.1.3} \] We can then calculate the mole fraction of the components in the vapor phase as: \[\begin{equation} \begin{aligned} y_{\text{A}}=\dfrac{P_{\text{A}}}{P_{\text{TOT}}} & \qquad y_{\text{B}}=\dfrac{P_{\text{B}}}{P_{\text{TOT}}} \\ y_{\text{A}}=\dfrac{0.02}{0.05}=0.40 & \qquad y_{\text{B}}=\dfrac{0.03}{0.05}=0.60 \end{aligned} \end{equation}\label{13.1.4} \] Notice how the mole fraction of toluene is much higher in the liquid phase, \(x_{\text{A}}=0.67\), than in the vapor phase, \(y_{\text{A}}=0.40\).As is clear from the results of Exercise \(\PageIndex{1}\), the concentrations of the components in the liquid and vapor phases are different. We can also report the mole fraction in the vapor phase as an additional line in the \(Px_{\text{B}}\) diagram of .
When both concentrations are reported in one diagram—as in —the line where \(x_{\text{B}}\) is reported is called the liquidus line, while the line where \(y_{\text{B}}\) is reported is called the dew point line.The liquidus and dew point lines determine a new section in the phase diagram where the liquid and vapor phases coexist. Since the degrees of freedom inside this area are only 2, for a system at constant temperature a point inside the coexistence area has fixed mole fractions for both phases. We can reduce the pressure on top of a liquid solution with concentration \(x^i_{\text{B}}\) (see ) until the solution hits the liquidus line. At this pressure, the solution forms a vapor phase with mole fraction given by the corresponding point on the dew point line, \(y^f_{\text{B}}\).We can now consider the phase diagram of a 2-component ideal solution as a function of temperature at constant pressure. The \(Tx_{\text{B}}\) diagram for two volatile components is reported in .Compared to the \(Px_{\text{B}}\) diagram of , the phases are now in reversed order, with the liquid at the bottom (low temperature) and the vapor on top (high temperature). The liquidus and dew point lines are curved and form a lens-shaped region where liquid and vapor coexist. Once again, there is only one degree of freedom inside the lens. As such, a liquid solution of initial composition \(x_{\text{B}}^i\) can be heated until it hits the liquidus line. At this temperature the solution boils, producing a vapor with concentration \(y_{\text{B}}^f\). As is clear from , the mole fraction of the \(\text{B}\) component in the gas phase is lower than the mole fraction in the liquid phase. This fact can be exploited to separate the two components of the solution. In particular, if we set up a series of consecutive evaporations and condensations, we can distill fractions of the solution with an increasingly lower concentration of the less volatile component \(\text{B}\).
This is exemplified in the industrial process of fractional distillation, as schematically depicted in .Each of the horizontal lines in the lens region of the \(Tx_{\text{B}}\) diagram of corresponds to a condensation/evaporation process and is called a theoretical plate. These plates are industrially realized in large columns with several floors equipped with condensation trays. The temperature decreases with the height of the column. A condensation/evaporation process happens on each level, and a solution enriched in the more volatile component is collected. The theoretical plates and the \(Tx_{\text{B}}\) diagram are crucial for sizing industrial fractional distillation columns.
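The successive condensation/evaporation steps can be sketched numerically with Raoult's and Dalton's laws, using the toluene/benzene vapor pressures of Exercise \(\PageIndex{1}\) (note that in those numbers benzene, \(\text{B}\), is the more volatile component, so each theoretical plate enriches the vapor in \(\text{B}\)):

```python
def vapor_mole_fraction(x_B, P_A_star=0.03, P_B_star=0.10):
    """Mole fraction of B in the vapor above a liquid of composition x_B,
    from Raoult's law for each partial pressure and Dalton's law for the
    total pressure (vapor pressures in bar, from Exercise 13.1.1)."""
    P_A = (1.0 - x_B) * P_A_star
    P_B = x_B * P_B_star
    return P_B / (P_A + P_B)

# Each theoretical plate condenses the vapor and re-evaporates it,
# enriching the mixture in the more volatile component B:
x = 0.33
for plate in range(3):
    x = vapor_mole_fraction(x)
    print(plate + 1, round(x, 2))
```

With full precision the first step gives \(y_{\text{B}} \approx 0.62\) (rather than the rounded 0.60 of the exercise), and three plates already bring the benzene mole fraction above 0.9, illustrating why a modest number of trays can achieve a good separation for components with well-separated vapor pressures.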
13.2: Phase Diagrams of Non-Ideal Solutions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/13%3A_Multi-Component_Phase_Diagrams/13.02%3A_Phase_Diagrams_of_Non-Ideal_Solutions
Non-ideal solutions follow Raoult’s law over only a narrow range of concentrations. The typical behavior of a non-ideal solution with a single volatile component is reported in the \(Px_{\text{B}}\) plot in .Raoult’s behavior is observed for high concentrations of the volatile component. This behavior is observed at \(x_{\text{B}} \rightarrow 0\) in , since the volatile component in this diagram is \(\mathrm{A}\). At low concentrations of the volatile component, \(x_{\text{B}} \rightarrow 1\) in , the solution follows a steeper line, a behavior which is known as Henry’s law. William Henry (1774–1836) extensively studied the behavior of gases dissolved in liquids. His studies resulted in a simple law that relates the partial vapor pressure of a component to a constant, called the Henry’s law solubility constant:\[ P_{\text{B}}=k_{\text{AB}} x_{\text{B}}, \label{13.2.1} \]where \(k_{\text{AB}}\) depends on the chemical nature of \(\mathrm{A}\) and \(\mathrm{B}\). The corresponding diagram for non-ideal solutions with two volatile components is reported on the left panel of . The total pressure is once again calculated as the sum of the two partial pressures. Positive deviations from Raoult’s ideal behavior are not the only possible deviation from ideality; negative deviations also exist, albeit slightly less common. An example of a negative deviation is reported in the right panel of .If we move from the \(Px_{\text{B}}\) diagram to the \(Tx_{\text{B}}\) diagram, the behaviors observed in will correspond to the diagram in .The minimum (left plot) and maximum (right plot) points in represent the so-called azeotrope.An azeotrope is a constant-boiling-point solution whose composition cannot be altered or changed by simple distillation. This happens because the liquidus and dew point lines coincide at this point. Therefore, the liquid and the vapor phases have the same composition, and distillation cannot occur.
Two types of azeotropes exist, representative of the two types of non-ideal behavior of solutions. The first type is the positive azeotrope (left plot in ). A well-known example of this behavior at atmospheric pressure is the ethanol/water mixture, with composition 95.63% ethanol by mass. This positive azeotrope boils at \(T=78.2\;^\circ \text{C}\), a temperature that is lower than the boiling points of the pure constituents, since ethanol boils at \(T=78.4\;^\circ \text{C}\) and water at \(T=100\;^\circ \text{C}\). The second type is the negative azeotrope (right plot in ). An example of this behavior at atmospheric pressure is the hydrochloric acid/water mixture with composition 20.2% hydrochloric acid by mass. This negative azeotrope boils at \(T=110\;^\circ \text{C}\), a temperature that is higher than the boiling points of the pure constituents, since hydrochloric acid boils at \(T=-84\;^\circ \text{C}\) and water at \(T=100\;^\circ \text{C}\).
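Returning to Henry's law, Equation \ref{13.2.1}: in the dilute limit the solubility of a gas follows directly from the constant \(k_{\text{AB}}\). The sketch below uses a rough mole-fraction-basis Henry's law constant for \(\mathrm{CO}_2\) in water near room temperature (\(\approx 1.65\times 10^{3}\;\text{bar}\), an assumed literature value):

```python
k_H = 1.65e3  # bar, Henry's law constant for CO2 in water ~298 K (assumed)

def dissolved_mole_fraction(P_gas, k=k_H):
    """Equilibrium mole fraction of dissolved gas from Henry's law,
    x = P / k (Eq. 13.2.1 solved for the mole fraction)."""
    return P_gas / k

print(dissolved_mole_fraction(1.0))  # ~6e-4: CO2 is sparingly soluble
```

The linear dependence on the gas pressure is why carbonated drinks are bottled under several bars of \(\mathrm{CO}_2\) and lose their dissolved gas once opened.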
13.3: Phase Diagrams of 2-Components/2-Condensed Phases Systems
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/13%3A_Multi-Component_Phase_Diagrams/13.03%3A_Phase_Diagrams_of_2-Components_2-Condensed_Phases_Systems
We now consider equilibria between two condensed phases: liquid/liquid, liquid/solid, and solid/solid. These equilibria usually occur in the low-temperature (or high-pressure) region of a phase diagram. Three situations are possible, depending on the constituents and concentration of the mixture.We have already encountered the situation where the components of a solution mix entirely in the liquid phase. All the diagrams that we’ve discussed up to this point belong to this category.A more complicated case is that of components that do not mix in the liquid phase. The liquid region of the temperature–composition phase diagram for a solution with components that do not mix in the liquid phase below a specific temperature is reported in .While the liquid 1 + liquid 2 region (white area in ) might seem similar to the liquid region that sits on top of it (blue area in ), it is substantially different in nature. To prove this, we can calculate the degrees of freedom in each region using the Gibbs phase rule. For the liquid region at the top of the diagram, at constant pressure, we have \(f=2-1+1=2\). In other words, the temperature and the composition are independent, and their values can be changed regardless of each other. In the liquid 1 + liquid 2 region at the bottom, however, we have \(f=2-2+1=1\), which means that only one variable is independent of the others. The white region in is a 2-phase region, and it behaves similarly to the other 2-phase regions that we encountered before, such as the inner portion of the lens in .The third and final case is undoubtedly the most interesting, since several behaviors are possible. In fact, there might be components that are partially miscible at low temperatures but totally miscible at higher temperatures, for which the diagram will assume the general shape depicted in .
A typical example of this behavior is the mixture of water and phenol, whose liquids are completely miscible at \(T>66\;^\circ \text{C}\), and only partially miscible below this temperature. The composition of the 2-phase region (white area in ) is determined by tracing a horizontal line and reading the mole fractions on the lines that delimit the area, as for the previous case.\(^1\) On the opposite side of the spectrum, the diagram for a mixture whose components are partially miscible at high temperature, but completely miscible at lower temperatures, is depicted in . A typical example of this behavior is the mixture of water and triethylamine, whose liquids are completely miscible at \(T<18.5\;^\circ \text{C}\), and only partially miscible above this temperature.Finally, both situations described above are possible simultaneously. For some particular solutions, there exists a range of temperatures where the two components are only partially miscible. A typical example of this behavior is given by the water/nicotine mixture, whose liquids are completely miscible at \(T>210\;^\circ \text{C}\) and \(T<61\;^\circ \text{C}\), but only partially miscible between these two temperatures, as in the diagram of .For some particular mixtures, the temperature of partial miscibility in the liquid/liquid region might be close to the azeotrope temperature. In some cases, these two regions might even overlap. These characteristic behaviors are reported in .When the azeotrope and the partial-miscibility temperature overlap, the system forms what is known as a eutectic. Eutectic diagrams are possible at the liquid/gas equilibrium, but they are most widespread at the liquid/solid equilibrium, where two components are completely miscible in the liquid phase but only partially miscible in the solid phase.
Eutectics with completely immiscible components in the solid phase are also very common, as shown in the corresponding diagram.

This page titled 13.3: Phase Diagrams of 2-Components/2-Condensed Phases Systems is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
14.1: Activity
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/14%3A_Properties_of_Solutions/14.01%3A_Activity
For non-ideal gases, we introduced in chapter 11 the concept of fugacity as an effective pressure that accounts for non-ideal behavior. If we extend this concept to non-ideal solutions, we can introduce the activity of a liquid or a solid, \(a\), as:\[ \mu_{\text{non-ideal}} = \mu^{\ominus} + RT \ln a, \label{14.1.1} \]where \(\mu\) is the chemical potential of the substance or the mixture, and \(\mu^{\ominus}\) is the chemical potential at standard state. Comparing this definition with Equation 11.4.2, it is clear that for a non-ideal gas the activity is equal to the fugacity (which, in turn, is equal to the pressure for an ideal gas). For a liquid or a liquid mixture, however, the activity depends on the chemical potential at standard state. This means that the activity is not an absolute quantity, but rather a relative term describing how “active” a compound is compared to standard state conditions. The choice of the standard state is, in principle, arbitrary, but conventions are often chosen out of mathematical or experimental convenience. We already discussed the convention that the standard state for a gas is at \(P^{\ominus}=1\;\text{bar}\), so that the activity is equal to the fugacity. The standard state for a component in a solution is the pure component at the temperature and pressure of the solution. This definition is equivalent to setting the activity of a pure component, \(i\), at \(a_i=1\).

For a component in a solution we can use Equation 11.4.2 to write the chemical potential in the gas phase as:\[ \mu_i^{\text{vapor}} = \mu_i^{\ominus} + RT \ln \dfrac{P_i}{P^{\ominus}}. \label{14.1.2} \]If the gas phase is in equilibrium with the liquid solution, then \(\mu_i^{\text{solution}} = \mu_i^{\text{vapor}}\), while for the pure component \(i\) in equilibrium with its own vapor:\[ \mu_i^{*} = \mu_i^{\ominus} + RT \ln \dfrac{P_i^*}{P^{\ominus}}, \label{14.1.3} \]where \(\mu_i^*\) is the chemical potential of the pure component. Subtracting Equation \ref{14.1.3} from Equation \ref{14.1.2}, we obtain:\[ \mu_i^{\text{solution}} = \mu_i^* + RT \ln \dfrac{P_i}{P^*_i}.
\label{14.1.4} \]For an ideal solution, we can use Raoult’s law, Equation 13.1.1, to rewrite Equation \ref{14.1.4} as:\[ \mu_i^{\text{solution}} = \mu_i^* + RT \ln x_i, \label{14.1.5} \]which relates the chemical potential of a component in an ideal solution to the chemical potential of the pure liquid and its mole fraction in the solution. For a non-ideal solution, the partial pressure in Equation \ref{14.1.4} is either larger (positive deviation) or smaller (negative deviation) than the pressure calculated using Raoult’s law. The chemical potential of a component in the mixture is then calculated using:\[ \mu_i^{\text{solution}} = \mu_i^* + RT \ln \left(\gamma_i x_i\right), \label{14.1.6} \]where \(\gamma_i\) is a positive coefficient that accounts for deviations from ideality. This coefficient is either larger than one (for positive deviations), or smaller than one (for negative deviations). The activity of component \(i\) can be calculated as an effective mole fraction, using:\[ a_i = \gamma_i x_i, \label{14.1.7} \]where \(\gamma_i\) is defined as the activity coefficient. The partial pressure of the component can then be related to its vapor pressure, using:\[ P_i = a_i P_i^*. \label{14.1.8} \]Comparing Equation \ref{14.1.8} with Raoult’s law, we can calculate the activity coefficient as:\[ \gamma_i = \dfrac{P_i}{x_i P_i^*} = \dfrac{P_i}{P_i^{\text{R}}}, \label{14.1.9} \]where \(P_i^{\text{R}}\) is the partial pressure calculated using Raoult’s law. This result also proves that for an ideal solution, \(\gamma=1\). Equation \ref{14.1.9} can also be used experimentally to obtain the activity coefficient from the phase diagram of the non-ideal solution. This is achieved by measuring the value of the partial pressure of the vapor of a non-ideal solution. 
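The relations above can be sketched numerically. In this hypothetical example (the mole fraction and pressures are made-up values, not data from the text), the activity coefficient and activity of a component are obtained from its measured partial pressure:

```python
# Activity coefficient from a measured partial pressure (hypothetical numbers).
x_i = 0.40        # mole fraction of component i in the liquid
P_i_star = 0.35   # vapor pressure of pure i, in bar
P_i = 0.21        # measured partial pressure of i above the solution, in bar

P_raoult = x_i * P_i_star    # Raoult's law prediction, P_i^R = x_i * P_i*
gamma_i = P_i / P_raoult     # activity coefficient, gamma_i = P_i / (x_i P_i*)
a_i = gamma_i * x_i          # activity as an effective mole fraction

print(round(gamma_i, 3))  # 1.5 -> gamma > 1: positive deviation from Raoult's law
print(round(a_i, 3))      # 0.6 -> consistent with a_i = P_i / P_i*
```

Note that the last line is consistent with the relation \(P_i = a_i P_i^*\): the same activity follows directly from the ratio of the measured partial pressure to the vapor pressure of the pure component.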
Examples of this procedure are reported for both positive and negative deviations in .This page titled 14.1: Activity is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
14.2: Colligative Properties
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/14%3A_Properties_of_Solutions/14.02%3A_Colligative_Properties
Colligative properties are properties of solutions that depend on the number of particles in the solution and not on the nature of the chemical species. More specifically, a colligative property depends on the ratio between the number of particles of the solute and the number of particles of the solvent. This ratio can be measured using any unit of concentration, such as mole fraction, molarity, and normality. For dilute solutions, however, the most useful concentration for studying colligative properties is the molality, \(m\), which measures the ratio between the number of particles of the solute (in moles) and the mass of the solvent (in kg):\[ m = \dfrac{n_{\text{solute}}}{m_{\text{solvent}}}. \label{14.2.1} \]Colligative properties usually result from the dissolution of a nonvolatile solute in a volatile liquid solvent, and they are properties of the solvent, modified by the presence of the solute. They are physically explained by the fact that the solute particles displace some solvent molecules in the liquid phase, thereby reducing the concentration of the solvent. This explanation shows how colligative properties are independent of the nature of the chemical species in a solution only if the solution is ideal. For non-ideal solutions, the formulas that we will derive below are valid only in an approximate manner. We will discuss the following four colligative properties: relative lowering of the vapor pressure, elevation of the boiling point, depression of the melting point, and osmotic pressure.

As we have already discussed in chapter 13, the vapor pressure of an ideal solution follows Raoult’s law.
Its difference with respect to the vapor pressure of the pure solvent can be calculated as:\[\begin{equation} \begin{aligned} P_{\text{solvent}}^* &- P_{\text{solution}} = P_{\text{solvent}}^* - x_{\text{solvent}} P_{\text{solvent}}^* \\ & = \left( 1-x_{\text{solvent}}\right)P_{\text{solvent}}^* =x_{\text{solute}} P_{\text{solvent}}^*, \end{aligned} \end{equation} \label{14.2.2} \]which shows that the vapor pressure lowering depends only on the concentration of the solute. As such, it is a colligative property.

The following two colligative properties are explained by reporting the changes due to the solute molecules in the plot of the chemical potential as a function of temperature. At the boiling point, the chemical potential of the solution is equal to the chemical potential of the vapor, and the following relation can be obtained:\[\begin{equation} \begin{aligned} \mu_{\text{solution}} &=\mu_{\text{vap}}=\mu_{\text{solvent}}^{\ominus} + RT \ln P_{\text{solution}} \\ &= \mu_{\text{solvent}}^{\ominus} + RT \ln \left(x_{\text{solvent}} P_{\text{solvent}}^* \right)\\ &= \underbrace{\mu_{\text{solvent}}^{\ominus} + RT \ln P_{\text{solvent}}^*}_{\mu_{\text{solvent}}^*} + RT \ln x_{\text{solvent}} \\ &= \mu_{\text{solvent}}^* + RT \ln x_{\text{solvent}}, \end{aligned} \end{equation} \label{14.2.3} \]and since \(x_{\text{solvent}}<1\), the logarithmic term in the last expression is negative, and:\[ \mu_{\text{solution}} < \mu_{\text{solvent}}^*.
\label{14.2.4} \]Equation \ref{14.2.3} proves that the addition of a solute always stabilizes the solvent in the liquid phase and lowers its chemical potential.

The elevation of the boiling point can be quantified using:\[ \Delta T_{\text{b}}=T_{\text{b}}^{\text{solution}}-T_{\text{b}}^{\text{solvent}}=iK_{\text{b}}m, \label{14.2.5} \]where \(i\) is the van ’t Hoff factor, a coefficient that measures the number of solute particles for each formula unit, \(K_{\text{b}}\) is the ebullioscopic constant of the solvent, and \(m\) is the molality of the solution, as introduced in Equation \ref{14.2.1} above. For a solute that does not dissociate in solution, \(i=1\). For a solute that dissociates in solution, the number of particles in solution depends on how many particles it dissociates into, and \(i>1\). For example, the strong electrolyte \(\mathrm{Ca}\mathrm{Cl}_2\) completely dissociates into three particles in solution, one \(\mathrm{Ca}^{2+}\) and two \(\mathrm{Cl}^-\), and \(i=3\). For cases of partial dissociation, such as weak acids, weak bases, and their salts, \(i\) can assume non-integer values.

If we assume ideal solution behavior, the ebullioscopic constant can be obtained from the thermodynamic condition for liquid-vapor equilibrium.
At the boiling point of the solution, the chemical potential of the solvent in the solution phase equals the chemical potential in the pure vapor phase above the solution:\[ \mu_{\text{solution}} (T_{\text{b}}) = \mu_{\text{solvent}}^*(T_{\text{b}}) + RT\ln x_{\text{solvent}}, \label{14.2.6} \]from which we can derive, using the Gibbs–Helmholtz equation, Equation 9.2.4:\[ K_{\text{b}}=\dfrac{RMT_{\text{b}}^{2}}{\Delta_{\mathrm{vap}} H}, \label{14.2.7} \]where \(R\) is the ideal gas constant, \(M\) is the molar mass of the solvent, and \(\Delta_{\mathrm{vap}} H\) is its molar enthalpy of vaporization.

The reduction of the melting point is similarly obtained by:\[ \Delta T_{\text{m}}=T_{\text{m}}^{\text{solution}}-T_{\text{m}}^{\text{solvent}}=-iK_{\text{m}}m, \label{14.2.8} \]where \(i\) is the van ’t Hoff factor introduced above, \(K_{\text{m}}\) is the cryoscopic constant of the solvent, \(m\) is the molality, and the minus sign accounts for the fact that the melting temperature of the solution is lower than the melting temperature of the pure solvent (\(\Delta T_{\text{m}}\) is defined as a negative quantity, while \(i\), \(K_{\text{m}}\), and \(m\) are all positive). Similarly to the previous case, the cryoscopic constant can be related to the molar enthalpy of fusion of the solvent using the equivalence of the chemical potential of the solid and the liquid phases at the melting point, and employing the Gibbs–Helmholtz equation:\[ K_{\text{m}}=\dfrac{RMT_{\text{m}}^{2}}{\Delta_{\mathrm{fus}}H}. \label{14.2.9} \]Notice how the depression of the melting point is always larger than the elevation of the boiling point. This is because the chemical potential of the solid is essentially flat, while the chemical potential of the gas is steep. Consequently, the value of the cryoscopic constant is always bigger than the value of the ebullioscopic constant.
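The two constants can be checked numerically for water. The molar mass, boiling and melting points, and molar enthalpies of vaporization and fusion used below are standard literature values, assumed here rather than taken from this section:

```python
# Ebullioscopic and cryoscopic constants of water from K = R*M*T^2 / ΔH.
R = 8.314          # J/(mol K), ideal gas constant
M = 0.01802        # kg/mol, molar mass of water
T_b = 373.15       # K, boiling point of water
T_m = 273.15       # K, melting point of water
dH_vap = 40660.0   # J/mol, molar enthalpy of vaporization (literature value)
dH_fus = 6010.0    # J/mol, molar enthalpy of fusion (literature value)

K_b = R * M * T_b**2 / dH_vap   # ebullioscopic constant, K kg / mol
K_m = R * M * T_m**2 / dH_fus   # cryoscopic constant, K kg / mol

print(round(K_b, 3))  # ~0.513
print(round(K_m, 2))  # ~1.86
```

Even though \(T_{\text{m}} < T_{\text{b}}\), the much smaller enthalpy of fusion makes the cryoscopic constant come out roughly 3.6 times larger than the ebullioscopic one.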
For example, for water \(K_{\text{m}} = 1.86\; \dfrac{\text{K kg}}{\text{mol}}\), while \(K_{\text{b}} = 0.512\; \dfrac{\text{K kg}}{\text{mol}}\). This is consistent with the fact that the enthalpy of vaporization is larger than the enthalpy of fusion, since each constant is inversely proportional to the corresponding molar enthalpy.

The osmotic pressure of a solution is defined as the difference in pressure between the solution and the pure liquid solvent when the two are in equilibrium across a semi-permeable (osmotic) membrane. The osmotic membrane is made of a porous material that allows the flow of solvent molecules but blocks the flow of the solute ones. Starting from a solvent at atmospheric pressure on both sides of such a membrane, we can add solute particles to the left side of the apparatus. The increase in concentration on the left causes a net transfer of solvent across the membrane toward the solution. This flow stops when the pressure difference equals the osmotic pressure, \(\pi\). The formula that governs the osmotic pressure was initially proposed by van ’t Hoff and later refined by Harmon Northrop Morse (1848–1920). The Morse formula reads:\[ \pi = imRT, \label{14.2.10} \]where \(i\) is the van ’t Hoff factor introduced above, \(m\) is the molality of the solution, \(R\) is the ideal gas constant, and \(T\) the temperature of the solution. As with the other colligative properties, the Morse equation is a consequence of the equality of the chemical potentials of the solvent and the solution at equilibrium.\(^1\)

This page titled 14.2: Colligative Properties is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
15.1: Differential and integrated rate laws
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/15%3A_Chemical_Kinetics/15.01%3A_Differential_and_integrated_rate_laws
The rate law of a chemical reaction is an equation that links the initial rate with the concentrations (or pressures) of the reactants. Rate laws usually include a constant parameter, \(k\), called the rate coefficient, and several parameters that appear as exponents of the reactant concentrations, called reaction orders. The rate coefficient depends on several conditions, including the reaction type, the temperature, the surface area of an adsorbent, light irradiation, and others. The rate coefficient is usually represented with the lowercase letter \(k\), and it should not be confused with the thermodynamic equilibrium constant, which is generally designated with the uppercase letter \(K\). Another useful concept in kinetics is the half-life, usually abbreviated with \(t_{1/2}\). The half-life is defined as the time required to reach half of the initial reactant concentration.

A reaction that happens in one single microscopic step is called elementary. Elementary reactions have reaction orders equal to the (integer) stoichiometric coefficients for each reactant. As such, only a limited number of elementary reactions are possible (four types are commonly observed), and they are classified according to their overall reaction order. The global reaction order of a reaction is calculated as the sum of each reactant’s individual orders and is, at most, equal to three. We examine the four most common reaction orders in detail below.

For a zeroth-order reaction, the reaction rate is independent of the concentration of the reactant. In other words, if we have a reaction of the type:\[ \text{A}\longrightarrow\text{products} \nonumber \]the differential rate law can be written:\[ - \dfrac{d[\mathrm{A}]}{dt}=k_0 [\mathrm{A}]^0 = k_0, \label{15.1.1} \]which shows that any change in the concentration of \(\mathrm{A}\) will have no effect on the speed of the reaction.
The minus sign at the left-hand side is required because the rate is always defined as a positive quantity, while the derivative is negative because the concentration of the reactant is diminishing with time. Separating the variables \([\mathrm{A}]\) and \(t\) of Equation \ref{15.1.1} and integrating both sides, we obtain the integrated rate law for a zeroth-order reaction as:\[ \begin{equation} \begin{aligned} \int_{[\mathrm{A}]_0}^{[\mathrm{A}]} d[\mathrm{A}] &= -k_0 \int_{t=0}^{t} dt \\ [\mathrm{A}]-[\mathrm{A}]_0 &= -k_0 t \\ \\ [\mathrm{A}]&=[\mathrm{A}]_0 -k_0 t. \end{aligned} \end{equation} \label{15.1.2} \]Using the integrated rate law, we notice that the concentration of the reactant diminishes linearly with respect to time. A plot of \([\mathrm{A}]\) as a function of \(t\), therefore, will result in a straight line with an angular coefficient equal to \(-k_0\). Equation \ref{15.1.2} also suggests that the units of the rate coefficient for a zeroth-order reaction are those of concentration divided by time, typically \(\dfrac{\mathrm{M}}{\mathrm{s}}\), with \(\mathrm{M}\) being the molar concentration in \(\dfrac{\mathrm{mol}}{\mathrm{L}}\) and \(\mathrm{s}\) the time in seconds. The half-life of a zeroth-order reaction can be calculated from Equation \ref{15.1.2}, by replacing \([\mathrm{A}]\) with \(\dfrac{1}{2}[\mathrm{A}]_0\):\[ \begin{equation} \begin{aligned} \dfrac{1}{2}[\mathrm{A}]_0 &=[\mathrm{A}]_0 -k_0 t_{1/2} \\ t_{1/2} &= \dfrac{[\mathrm{A}]_0}{2k_0}. \end{aligned} \end{equation} \label{15.1.3} \]Zeroth-order reactions are common in several biochemical processes catalyzed by enzymes, such as the oxidation of ethanol to acetaldehyde in the liver by the alcohol dehydrogenase enzyme, which is zeroth-order in ethanol.

A first-order reaction depends on the concentration of only one reactant, and is therefore also called a unimolecular reaction.
As for the previous case, if we consider a reaction of the type:\[ \mathrm{A}\rightarrow \text{products} \nonumber \]the differential rate law for a first-order reaction is:\[ - \dfrac{d[\mathrm{A}]}{dt}=k_1 [\mathrm{A}]. \label{15.1.4} \]Following the usual blueprint of separating the variables and integrating both sides, we obtain the integrated rate law as:\[ \begin{equation} \begin{aligned} \int_{[\mathrm{A}]_0}^{[\mathrm{A}]} \dfrac{d[\mathrm{A}]}{[\mathrm{A}]} &= -k_1 \int_{t=0}^{t} dt \\ \ln \dfrac{[\mathrm{A}]}{[\mathrm{A}]_0}&=-k_1 t\\ \\ [\mathrm{A}] &= [\mathrm{A}]_0 \exp(-k_1 t). \end{aligned} \end{equation} \label{15.1.5} \]Using the integrated rate law to plot the concentration of the reactant, \([\mathrm{A}]\), as a function of time, \(t\), we obtain an exponential decay. However, if we plot the logarithm of the concentration, \(\ln[\mathrm{A}]\), as a function of time, we obtain a straight line with angular coefficient \(-k_1\). From Equation \ref{15.1.5}, we can also obtain the units of the rate coefficient for a first-order reaction, which typically are \(\dfrac{1}{\mathrm{s}}\), independent of concentration. Since the rate coefficient for first-order reactions has units of inverse time, it is sometimes called the frequency rate.

The half-life of a first-order reaction is:\[ \begin{equation} \begin{aligned} \ln \dfrac{\dfrac{1}{2}[\mathrm{A}]_0}{[\mathrm{A}]_0}&=-k_1 t_{1/2}\\ t_{1/2} &= \dfrac{\ln 2}{k_1}. \end{aligned} \end{equation} \label{15.1.6} \]The half-life of a first-order reaction is independent of the initial concentration of the reactant. Therefore, the half-life can be used in place of the rate coefficient to describe the reaction rate. Typical examples of first-order reactions are radioactive decays. For radioactive isotopes, it is common to report their rate of decay in terms of their half-life.
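The first-order integrated law and half-life can be sketched in a few lines of Python; the rate coefficient below is an arbitrary illustrative value:

```python
import math

# First-order decay: [A] = [A]0 * exp(-k1 t); half-life t_1/2 = ln(2)/k1.
k1 = 0.05   # 1/s, arbitrary illustrative rate coefficient
A0 = 1.0    # M, initial concentration

t_half = math.log(2) / k1               # independent of A0
A_at_half = A0 * math.exp(-k1 * t_half)

print(round(t_half, 3))     # ~13.863 s
print(round(A_at_half, 3))  # 0.5 -> exactly half of [A]0, as expected
```

Doubling or halving `A0` leaves `t_half` unchanged, which is the defining feature of first-order kinetics exploited when half-lives are tabulated for radioactive decay.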
For example, the most stable uranium nuclide, \(^{238}\mathrm{U}\), has a half-life of \(4.468\times 10^9\) years, while the most common fissile isotope of uranium, \(^{235}\mathrm{U}\), has a half-life of \(7.038\times 10^8\) years.\(^1\) Other examples of first-order reactions in chemistry are the class of SN1 nucleophilic substitution reactions in organic chemistry.

A reaction is second-order when the sum of the reaction orders is two. Elementary second-order reactions are also called bimolecular reactions. There are two possibilities: a simple one, where the reaction order of one reagent is two, or a more complicated one, with two reagents each having a reaction order of one. For the simple case, considering a reaction of the type:\[ \mathrm{A}+\mathrm{A}\rightarrow \text{products} \nonumber \]the differential rate law is:\[ - \dfrac{d[\mathrm{A}]}{dt}=k_2 [\mathrm{A}]^2, \label{15.1.7} \]which, after separating the variables and integrating both sides, gives the integrated rate law:\[ \dfrac{1}{[\mathrm{A}]}=\dfrac{1}{[\mathrm{A}]_0}+k_2 t, \label{15.1.8} \]so that a plot of \(\dfrac{1}{[\mathrm{A}]}\) as a function of time results in a straight line. Notice that the line has a positive angular coefficient, in contrast with the previous two cases, for which the angular coefficients were negative. The units of \(k\) for a simple second-order reaction are calculated from Equation \ref{15.1.8} and typically are \(\dfrac{1}{\mathrm{M}\cdot \mathrm{s}}\). The half-life of a simple second-order reaction is: \[ \begin{equation} \begin{aligned} \dfrac{1}{\dfrac{1}{2}[\mathrm{A}]_0}-\dfrac{1}{[\mathrm{A}]_0} &= k_2 t_{1/2} \\ t_{1/2} &= \dfrac{1}{k_2 [\mathrm{A}]_0}, \end{aligned} \end{equation} \label{15.1.9} \] which, perhaps not surprisingly, depends on the initial concentration of the reactant, \([\mathrm{A}]_0\). Therefore, if we start with a higher concentration of the reactant, the half-life will be shorter, and the reaction will be faster. An example of simple second-order behavior is the reaction \(\mathrm{NO}_2 + \mathrm{CO} \rightarrow \mathrm{NO} + \mathrm{CO}_2\), which is second-order in \(\mathrm{NO}_2\) and zeroth-order in \(\mathrm{CO}\).

Although elementary reactions with order higher than two are possible, they are in practice infrequent, and only very few experimental third-order reactions are observed.
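The dependence of the simple second-order half-life on the initial concentration can be illustrated numerically (the rate coefficient below is an arbitrary illustrative value):

```python
# Half-life of a simple second-order reaction: t_1/2 = 1 / (k2 [A]0).
# Unlike the first-order case, it depends on the initial concentration.
k2 = 0.10   # 1/(M s), arbitrary illustrative rate coefficient

for A0 in (2.0, 1.0, 0.5):      # initial concentrations in M
    t_half = 1.0 / (k2 * A0)
    print(A0, t_half)           # higher [A]0 -> shorter half-life
```

Each successive half-life doubles as the reaction proceeds, which is one practical way to distinguish second-order from first-order behavior in experimental data.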
Fourth-order or higher reactions have never been observed, because the probability of a simultaneous interaction between four molecules is essentially zero. Third-order elementary reactions are also called termolecular reactions. While termolecular reactions with three identical reactants are possible in principle, there is no known experimental example. Some complex third-order reactions are known, such as:\[ 2\text{NO}_{(g)}+\text{O}_{2(g)}\longrightarrow 2\text{NO}_{2(g)} \nonumber \]for which the differential rate law can be written as:\[ -\dfrac{dP_{\mathrm{O}_2}}{dt}=k_3 P_{\mathrm{NO}}^2 P_{\mathrm{O}_2}. \label{15.1.12} \]

This page titled 15.1: Differential and integrated rate laws is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
15.2: Complex Rate Laws
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/15%3A_Chemical_Kinetics/15.02%3A_Complex_Rate_Laws
It is essential to specify that the order of a reaction and its molecularity are equal only for elementary reactions. Reactions that follow complex rate laws are composed of several elementary steps, and they usually have non-integer reaction orders for at least one of the reactants.

A reaction that happens following a sequence of two elementary steps can be written as follows:\[ \text{A}\xrightarrow{\;k_1\;}\text{B}\xrightarrow{\;k_2\;}\text{C} \nonumber \]Assuming that each of the steps follows a first-order rate law, and that only the reagent \(\mathrm{A}\) is present at the beginning of the reaction, we can write the differential change in concentration of each species with respect to infinitesimal time \(dt\), using the following formulas:\[\begin{equation} \begin{aligned} -\dfrac{d[\mathrm{A}]}{dt}&=k_1 [\mathrm{A}] \Rightarrow [\mathrm{A}] = [\mathrm{A}]_0 \exp(-k_1 t) \\ \dfrac{d[\mathrm{B}]}{dt} &=k_1 [\mathrm{A}]-k_2 [\mathrm{B}] \\ \dfrac{d[\mathrm{C}]}{dt} &=k_2 [\mathrm{B}]. \end{aligned} \end{equation}\label{15.2.1} \]These three equations represent a system of differential equations with three unknown variables. Unfortunately, these equations are linearly dependent on each other, and they are not sufficient to solve the system for each variable. To do so, we need to include a fourth equation, coming from the conservation of mass:\[ [\mathrm{A}]_0=[\mathrm{A}]+[\mathrm{B}]+[\mathrm{C}].
\label{15.2.2} \]Using the first equation in Equation \ref{15.2.1}, we can now replace the concentration \([\mathrm{A}]\) in the second equation and solve for \([\mathrm{B}]\):\[ \dfrac{d[\mathrm{B}]}{dt}+k_2 [\mathrm{B}]=k_1 [\mathrm{A}]_0 \exp(-k_1 t), \label{15.2.3} \]which can be simplified by multiplying both sides by \(\exp (k_2t)\):\[\begin{equation} \begin{aligned} \left( \dfrac{d[\mathrm{B}]}{dt}+k_2 [\mathrm{B}] \right) \exp (k_2t) &= k_1 [\mathrm{A}]_0 \exp[(k_2-k_1) t] \\ \Rightarrow \dfrac{d\left\{[\mathrm{B}]\exp (k_2t)\right\}}{dt} &= k_1 [\mathrm{A}]_0 \exp[(k_2-k_1) t], \end{aligned} \end{equation}\label{15.2.4} \]which can then be integrated remembering that \([\mathrm{B}]_0=0\), and that \(\int \exp(kx)\,dx=\dfrac{1}{k}\exp(kx)\):\[ [\mathrm{B}] = \dfrac{k_1}{k_2-k_1} [\mathrm{A}]_0 [\exp(-k_1t)-\exp(-k_2t)]. \label{15.2.5} \]We can then use both \([\mathrm{A}]\), from Equation \ref{15.2.1}, and \([\mathrm{B}]\), from Equation \ref{15.2.5}, in Equation \ref{15.2.2} to solve for \([\mathrm{C}]\):\[\begin{equation} \begin{aligned} \left[\mathrm{C}\right] &= [\mathrm{A}]_0-[\mathrm{A}]-[\mathrm{B}] \\ &= [\mathrm{A}]_0-[\mathrm{A}]_0 \exp(-k_1 t)-\dfrac{k_1}{k_2-k_1} [\mathrm{A}]_0 [\exp(-k_1t)-\exp(-k_2t)] \\ &= [\mathrm{A}]_0\left\{1+\dfrac{-k_2 \exp(-k_1t)+ k_1 \exp(-k_2t)}{k_2-k_1} \right\}. \end{aligned} \end{equation}\label{15.2.6} \]From these results, we can distinguish two extreme behaviors. The first one is observed when \(k_1 \gg k_2\), and it produces concentration-versus-time profiles in which the intermediate \(\mathrm{B}\) accumulates before being slowly converted into \(\mathrm{C}\). This behavior is observed when a process undergoing a series of consecutive reactions presents a rate-determining step in the middle of the sequence (the second reaction, in the simple case analyzed above).
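The closed-form solutions derived above can be checked numerically; a convenient sanity check is mass conservation, \([\mathrm{A}]+[\mathrm{B}]+[\mathrm{C}]=[\mathrm{A}]_0\). The rate coefficients below are arbitrary illustrative values:

```python
import math

# Consecutive first-order reactions A -> B -> C: evaluate the closed-form
# concentrations at one time point and verify mass conservation.
k1, k2 = 1.0, 0.2   # 1/s, arbitrary illustrative rate coefficients
A0 = 1.0            # M, initial concentration of A
t = 3.0             # s

A = A0 * math.exp(-k1 * t)
B = k1 / (k2 - k1) * A0 * (math.exp(-k1 * t) - math.exp(-k2 * t))
C = A0 * (1 + (-k2 * math.exp(-k1 * t) + k1 * math.exp(-k2 * t)) / (k2 - k1))

print(round(A + B + C, 10))  # 1.0 -> [A]+[B]+[C] equals [A]0 at all times
```

Sweeping `t` over a grid of values reproduces the characteristic profiles: \(\mathrm{A}\) decays exponentially, \(\mathrm{B}\) rises and then falls, and \(\mathrm{C}\) grows monotonically toward \([\mathrm{A}]_0\).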
Once the process is established, its rate will equal the rate of the slowest step. The second behavior is observed when \(k_1\ll k_2\). In this case, the concentration of the intermediate species \(\mathrm{B}\) is not relevant throughout the process, and the rate-determining step is the first reaction. As such, the process has the same rate law as an elementary reaction going directly from \(\mathrm{A}\) to \(\mathrm{C}\). Since the concentration of \(\mathrm{B}\) is small and relatively constant throughout the process, \(\dfrac{d[\mathrm{B}]}{dt}=0\). We can then simplify the mathematical treatment of these reactions by eliminating it from the process altogether. This simplification is known as the steady-state approximation. It is used in chemical kinetics to study processes that undergo a series of reactions producing intermediate species whose concentrations are constant throughout the entire process.\[\begin{equation} \begin{aligned} \text{A} &\xrightarrow{\;k_1\;} \text{I}_1 \xrightarrow{\;k_2\;} \text{I}_2 \xrightarrow{\quad} \cdots \xrightarrow{\;k_n\;}\text{products} \\ & \text{Steady State Approximation:} \\ \text{A}&\xrightarrow{\qquad\qquad\qquad\qquad\quad\quad\;\;}\text{products} \end{aligned} \end{equation}\label{15.2.7} \]A process where two elementary reactions happen in parallel, competing with each other, can be written as follows:\[ \begin{matrix} &_{k_1} & B\\ &\nearrow & \\ A & & \\ &\searrow& \\ &_{k_2} & C \end{matrix} \nonumber \]Assuming that each step follows first-order kinetics, we can write:\[\begin{equation} \begin{aligned} -\dfrac{d[\mathrm{A}]}{dt} &=k_1 [\mathrm{A}]+k_2 [\mathrm{A}] \Rightarrow [\mathrm{A}]=[\mathrm{A}]_0\exp \left[ -(k_1+k_2)t \right] \\ \dfrac{d[\mathrm{B}]}{dt} &=k_1 [\mathrm{A}] \Rightarrow [\mathrm{B}]=\dfrac{k_1}{k_1+k_2}[\mathrm{A}]_0 \left\{ 1-\exp \left[ -(k_1+k_2)t \right] \right\} \\ \dfrac{d[\mathrm{C}]}{dt} &=k_2 [\mathrm{A}]\Rightarrow [\mathrm{C}]=\dfrac{k_2}{k_1+k_2}[\mathrm{A}]_0 \left\{ 1-\exp \left[ -(k_1+k_2)t
\right] \right\}. \end{aligned} \end{equation}\nonumber \]The concentration of each of the species can then be plotted against time. The final concentrations of the products, \([\mathrm{B}]_f\) and \([\mathrm{C}]_f\), will depend on the values of the two rate coefficients: if \(k_1>k_2\), then \([\mathrm{B}]_f>[\mathrm{C}]_f\), while if \(k_1<k_2\), then \([\mathrm{B}]_f<[\mathrm{C}]_f\).

A process where an elementary reaction can proceed in both the forward and the reverse direction (a pair of opposed reactions) can be written as follows:\[ \mathrm{A} \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} \mathrm{B}, \nonumber \]where the rate coefficients for the forward and backward reactions, \(k_1\) and \(k_{-1}\) respectively, are not necessarily equal to each other, but comparable in magnitude. We can write the rate laws for each of these elementary steps as:\[\begin{equation} \begin{aligned} -\dfrac{d[\mathrm{A}]}{dt} &=k_1 [\mathrm{A}]-k_{-1} [\mathrm{B}] = k_1 [\mathrm{A}]-k_{-1}\left([\mathrm{A}]_0-[\mathrm{A}]\right) \\ \dfrac{d[\mathrm{A}]}{dt} &=-(k_1+k_{-1})[\mathrm{A}] + k_{-1}[\mathrm{A}]_0, \end{aligned} \end{equation}\label{15.2.8} \]which can then be integrated to:\[\begin{equation} \begin{aligned} \left[\mathrm{A}\right] &=[\mathrm{A}]_0\dfrac{k_{-1}+k_1\exp[-(k_1+k_{-1})t]}{k_1+k_{-1}} \\ \left[\mathrm{B}\right] &=[\mathrm{A}]_0\left\{ 1-\dfrac{k_{-1}+k_1\exp[-(k_1+k_{-1})t]}{k_1+k_{-1}}\right\}. \end{aligned} \end{equation}\label{15.2.9} \]After a sufficiently long time, the system reaches a dynamic equilibrium, where the concentrations of \(\mathrm{A}\) and \(\mathrm{B}\) don’t change. These equilibrium concentrations can be calculated by replacing \(t=\infty\) in Equation \ref{15.2.9}:\[\begin{equation} \begin{aligned} \left[\mathrm{A} \right] _{\mathrm{eq}} &= [\mathrm{A}]_0 \dfrac{k_{-1}}{k_1+k_{-1}} \\ [\mathrm{B}]_{\mathrm{eq}} &= [\mathrm{A}]_0 \dfrac{k_{1}}{k_1+k_{-1}}.
\end{aligned} \end{equation}\label{15.2.10} \]Considering that the concentrations of the species don’t change at equilibrium:\[\begin{equation} \begin{aligned} -\dfrac{d[\mathrm{A}]_{\mathrm{eq}}}{dt} &= \dfrac{d[\mathrm{B}]_{\mathrm{eq}}}{dt} = 0\\ & \Rightarrow \; k_1[\mathrm{A}]_{\mathrm{eq}} = k_{-1}[\mathrm{B}]_{\mathrm{eq}} \\ & \Rightarrow \; \dfrac{k_1}{k_{-1}} = \dfrac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}} = K_C, \\ \end{aligned} \end{equation} \label{15.2.11} \]where \(K_C\) is the equilibrium constant as defined in chapter 10. This is a rare link between kinetics and thermodynamics and appears only for opposed reactions after sufficient time has passed so that the system can reach the dynamic equilibrium.This page titled 15.2: Complex Rate Laws is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
15.3: Experimental Methods for Determination of Reaction Orders
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/15%3A_Chemical_Kinetics/15.03%3A_Experimental_Methods_for_Determination_of_Reaction_Orders
To experimentally measure the reaction rate, we need a method to measure concentration changes with respect to time. The simplest way to determine the reaction rate is to monitor the entire reaction as it proceeds, and then plot the resulting data differently until a linear plot is found. A summary of the results obtained in section 15.1 that is useful for this task is reported in the following table:\[\begin{array}{cccc} \hline \text{Order} & \text{Differential rate law} & \text{Integrated rate law} & \text{Linear plot (slope)} \\ \hline 0 & -\dfrac{d[\mathrm{A}]}{dt}=k_0 & [\mathrm{A}]=[\mathrm{A}]_0-k_0 t & [\mathrm{A}] \text{ vs. } t \;\; (-k_0) \\ 1 & -\dfrac{d[\mathrm{A}]}{dt}=k_1[\mathrm{A}] & [\mathrm{A}]=[\mathrm{A}]_0\exp(-k_1 t) & \ln [\mathrm{A}] \text{ vs. } t \;\; (-k_1) \\ 2 & -\dfrac{d[\mathrm{A}]}{dt}=k_2[\mathrm{A}]^2 & \dfrac{1}{[\mathrm{A}]}=\dfrac{1}{[\mathrm{A}]_0}+k_2 t & \dfrac{1}{[\mathrm{A}]} \text{ vs. } t \;\; (k_2) \\ \hline \end{array}\]However, this method works only if the reaction has few reactants, and it requires several measurements, each of which might be complicated to make. More useful methods to determine the reaction rate are the initial rate and isolation methods that we describe below.

The initial rates method involves measuring the rate of a reaction as soon as it starts, before any significant change in the concentrations of the reactants occurs. The initial rate method is practical only if the reaction is reasonably slow, but it can measure the rate unambiguously when more than one reactant is involved. For example, if we have a reaction with the following stoichiometry:\[ \alpha \mathrm{A} + \beta \mathrm{B} \xrightarrow{k} \text{products} \nonumber \]the initial rate method can be used to determine the coefficients of the rate law:\[ \text{Rate}=k[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta} \label{15.3.1} \]by designing three experiments in which the initial concentrations of \(\mathrm{A}\) and \(\mathrm{B}\) are appropriately changed. For example, let’s consider the following experimental data from three different experiments (initial rates in arbitrary units):\[\begin{array}{cccc} \hline \text{Experiment} & [\mathrm{A}]_0\;(\text{M}) & [\mathrm{B}]_0\;(\text{M}) & \text{Initial rate} \\ \hline 1 & 0.10 & 0.10 & 4.32 \\ 2 & 0.15 & 0.10 & 9.70 \\ 3 & 0.10 & 0.20 & 4.32 \\ \hline \end{array}\]We can calculate \(\alpha\) by taking the ratio of the rates measured in experiments 1 and 2:\[\begin{equation} \begin{aligned} \dfrac{\text{Rate}_1}{\text{Rate}_2}&=\dfrac{k(0.10\;\text{M})^\alpha(0.10\;\text{M})^\beta}{k(0.15\;\text{M})^\alpha(0.10\;\text{M})^\beta} \\ \dfrac{4.32}{9.70}&=\dfrac{(0.10\;\text{M})^\alpha}{(0.15\;\text{M})^\alpha} \\ 0.445&=0.667^\alpha \;\rightarrow\; \ln0.445=\alpha \ln0.667 \\ \alpha &= \dfrac{-0.81}{-0.405}=2.
\end{aligned} \end{equation} \label{15.3.2} \]\(\beta\) can be calculated similarly by taking the ratio between experiments 1 and 3. Alternatively, we can also notice that the reaction rate does not change when the initial concentration \([\mathrm{B}]_0\) is doubled, therefore \(\beta=0\). Another method that is widely used to determine reaction orders is the isolation method. This method is performed by using large excess concentrations of all reactants but one. For example, if we have the following reaction with three reagents and unknown rate law:\[ \alpha \mathrm{A} + \beta \mathrm{B} + \gamma \mathrm{C} \xrightarrow{k} \text{products} \nonumber \]we can perform three different experiments, in each of which we use a large excess of two of the three reagents, such as the following. From each experiment we can determine the pseudo-order of the reaction with respect to the reagent that is not in excess. For example, for the reaction above, we can write the rate law as:\[ \text{Rate}=k[\mathrm{A}]^{\alpha}[\mathrm{B}]^{\beta}[\mathrm{C}]^{\gamma} \label{15.3.3} \]and we can write the initial concentrations, \([X]_0\), and the final concentrations, \([X]_f\), of each of the species in experiment 1, as:\[ \begin{aligned} \left[\mathrm{A}\right]_0 =1\;\text{M}\;\longrightarrow &[\mathrm{A}]_f=0\;\text{M} \qquad &\text{(100\% change)} \\ \left[\mathrm{B}\right]_0 =1000\;\text{M}\;\longrightarrow &[\mathrm{B}]_f=1000-1=999\;\text{M}\cong [\mathrm{B}]_0\qquad &\text{(0.1\% change)}\\ \left[\mathrm{C}\right]_0 =1000\;\text{M}\;\longrightarrow &[\mathrm{C}]_f=1000-1=999\;\text{M} \cong \left[\mathrm{C}\right]_0. 
\qquad &\text{(0.1\% change)} \end{aligned} \nonumber \]The coefficient \(\alpha\) can then be determined by incorporating the concentration of the reactants in excess into the rate constant as:\[ \begin{aligned} \text{rate}&=k[\mathrm{A}]^{\alpha}\underbrace{[\mathrm{B}]^{\beta}[\mathrm{C}]^{\gamma}}_{\text{constant}} \\ &= k'[\mathrm{A}]^{\alpha} \end{aligned} \nonumber \]and then verifying which order the data collected for \([\mathrm{A}]\) at various times fits. This can be achieved simply by using the zero-, first-, and second-order kinetic plots, as reported in the table above. We can determine \(\beta\) and \(\gamma\) by repeating the same procedure for the data from the other two experiments. For example, if we find for a specific reaction that \(\alpha=1\), \(\beta=2\), and \(\gamma=0\), we can then say that the reaction is pseudo-order one in \(\mathrm{A}\), pseudo-order two in \(\mathrm{B}\), and pseudo-order zero in \(\mathrm{C}\), with an overall reaction order of three. This page titled 15.3: Experimental Methods for Determination of Reaction Orders is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
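The order-extraction step of the initial rates method is easy to script. A minimal Python sketch, using the rates and concentrations quoted above for experiments 1 and 2 (4.32 and 9.70 in the text's rate units, with \([\mathrm{A}]_0\) of 0.10 M and 0.15 M); the variable names are illustrative:

```python
import math

# Initial-rate data quoted in the text for experiments 1 and 2:
# only [A]0 changes between them, so the ratio of rates isolates alpha.
rate1, rate2 = 4.32, 9.70   # measured initial rates (arbitrary units)
A1, A2 = 0.10, 0.15         # [A]0 in mol/L

# Rate1/Rate2 = (A1/A2)^alpha  ->  alpha = ln(Rate1/Rate2) / ln(A1/A2)
alpha = math.log(rate1 / rate2) / math.log(A1 / A2)
print(f"alpha = {alpha:.2f}")   # close to 2: second order in A
```

Because experimental rates carry noise, the computed exponent is rounded to the nearest integer order.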
15.4: Temperature Dependence of the Rate Coefficients
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/15%3A_Chemical_Kinetics/15.04%3A_Temperature_Dependence_of_the_Rate_Coefficients
The dependence of the rate coefficient, \(k\), on the temperature is given by the Arrhenius equation. This formula was derived by Svante August Arrhenius (1859–1927) in 1889 and is based on the simple experimental observation that every chemical process gets faster when the temperature is increased. Working on data from equilibrium reactions previously reported by van ’t Hoff, Arrhenius proposed the following simple exponential formula to explain the increase of \(k\) when \(T\) is increased:\[ k=A\exp\left( -\dfrac{E_a}{RT}\right), \label{15.4.1} \]where \(A\) is the so-called Arrhenius pre-exponential factor, and \(E_a\) is the activation energy. Both of these terms are independent of temperature,\(^1\) and they represent experimental quantities that are unique to each individual reaction. Since there is no known exception to the fact that a temperature increase speeds up chemical reactions, both \(A\) and \(E_a\) are always positive. The pre-exponential factor has the same units as the rate constant, which vary depending on the order of the reaction. As suggested by its name, the activation energy has units of energy per mole of substance, \(\dfrac{\mathrm{J}}{\mathrm{mol}}\) in SI. The Arrhenius equation is experimentally useful in its linearized form, which is obtained from two Arrhenius experiments taken at different temperatures. Applying Equation \ref{15.4.1} to two different experiments and taking the ratio between the results, we obtain:\[ \ln \dfrac{k_{T_2}}{k_{T_1}}=-\dfrac{E_a}{R}\left(\dfrac{1}{T_2}-\dfrac{1}{T_1}\right), \label{15.4.2} \]from which \(E_a\) can be determined; more generally, a plot of \(\ln k\) against \(1/T\) is linear with slope \(-E_a/R\). From empirical arguments, Arrhenius proposed the idea that reactants must acquire a minimum amount of energy before they can form any product. He called this amount of minimum energy the activation energy. 
We can motivate this assumption by plotting the energy of a reaction along the reaction coordinate.\(^2\) The reaction coordinate is defined as the minimum energy path that connects the reactants with the products. This page titled 15.4: Temperature Dependence of the Rate Coefficients is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
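The two-temperature form of the Arrhenius equation can be sketched numerically. The rate constants and temperatures below are hypothetical illustrative values, not data from the text:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants measured at two temperatures (illustrative):
T1, k1 = 300.0, 1.0e-3   # K, s^-1
T2, k2 = 320.0, 4.0e-3   # K, s^-1

# ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)
Ea = -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")   # ~55.3 kJ/mol for these values

# With Ea in hand, the pre-exponential factor follows from k = A exp(-Ea/RT):
A = k1 * math.exp(Ea / (R * T1))
```

A fourfold speed-up over a 20 K interval around room temperature thus corresponds to an activation energy of roughly 55 kJ/mol, a typical order of magnitude for thermally activated reactions.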
16.1: Introduction
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/16%3A_The_Motivation_for_Quantum_Mechanics/16.01%3A_Introduction
Quantum mechanics is an important intellectual achievement of the 20th century. It is one of the more sophisticated fields in physics, and it has affected our understanding of nano-meter length scale systems important for chemistry, materials, optics, and electronics. The existence of orbitals and energy levels in atoms can only be explained by quantum mechanics. Quantum mechanics can explain the behaviors of insulators, conductors, semi-conductors, and giant magneto-resistance. It can explain the quantization of light and its particle nature in addition to its wave nature. Quantum mechanics can also explain the radiation of a hot body and its change of color with respect to temperature. It explains the presence of holes and the transport of holes and electrons in electronic devices. Quantum mechanics has played an important role in photonics, quantum electronics, and micro-electronics. But many more emerging technologies require the understanding of quantum mechanics; and hence, it is important that scientists and engineers understand quantum mechanics better. One area is nano-technology, where the recent advent of nano-fabrication techniques has made nano-meter size systems commonplace. In electronics, as transistor devices become smaller, how the electrons move through the device is quite different from when the devices are bigger: nano-electronic transport is quite different from micro-electronic transport. The quantization of the electromagnetic field is important in the area of nano-optics and quantum optics. It explains how photons interact with atomic systems or materials. It also allows the use of electromagnetic or optical fields to carry quantum information. Moreover, quantum mechanics is also needed to understand the interaction of photons with materials in solar cells, as well as many topics in material science. 
When two objects are placed close together, they experience a force called the Casimir force that can only be explained by quantum mechanics. This is important for the understanding of micro/nano-electromechanical sensor systems (M/NEMS). Moreover, the understanding of spins is important in spintronics, another emerging technology where giant magneto-resistance, tunneling magneto-resistance, and spin transfer torque are being used. Quantum mechanics is also giving rise to the areas of quantum information, quantum communication, quantum cryptography, and quantum computing. It is seen that the richness of quantum physics will greatly affect the future generation technologies in many aspects. This page titled 16.1: Introduction is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
16.2: Quantum Mechanics is Bizarre
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/16%3A_The_Motivation_for_Quantum_Mechanics/16.02%3A_Quantum_Mechanics_is_Bizarre
The development of quantum mechanics is a great intellectual achievement, but at the same time, it is bizarre. The reason is that quantum mechanics is quite different from classical physics. The development of quantum mechanics is likened to watching two players having a game of chess where the watchers do not have a clue as to what the rules of the game are. By observation and conjecture, the rules of the game are finally outlined. Often, equations are conjectured like conjurors pulling tricks out of a hat to match experimental observations. It is the interpretations of these equations that can be quite bizarre. Quantum mechanics equations were postulated to explain experimental observations, but the deeper meanings of the equations often confused even the most gifted. Even though Einstein received the Nobel prize for his work on the photo-electric effect that confirmed that light energy is quantized, he himself was not totally at ease with the development of quantum mechanics as charted by the younger physicists. He was never comfortable with the probabilistic interpretation of quantum mechanics by Born and the Heisenberg uncertainty principle: “God doesn’t play dice,” was his statement assailing the probabilistic interpretation. He proposed “hidden variables” to explain the random nature of many experimental observations. He was thought of as the “old fool” by the younger physicists during his time. Schrödinger came up with the bizarre “Schrödinger cat paradox” that showed the struggle that physicists had with the interpretation of quantum mechanics. But with today’s understanding of quantum mechanics, the paradox is a thing of yesteryear. The latest twist to the interpretation of quantum mechanics is the parallel universe view that explains the multitude of outcomes of the prediction of quantum mechanics. 
All outcomes are possible, but with each outcome occurring in different universes that exist in parallel with respect to each other.\(^1\) The development of quantum mechanics was initially motivated by two observations which demonstrated the inadequacy of classical physics. These are the “ultraviolet catastrophe” and the photoelectric effect. This page titled 16.2: Quantum Mechanics is Bizarre is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
16.3: The Ultraviolet Catastrophe
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/16%3A_The_Motivation_for_Quantum_Mechanics/16.03%3A_The_Ultraviolet_Catastrophe
The ultraviolet (UV) catastrophe, also called the Rayleigh–Jeans catastrophe, is the prediction of classical electromagnetism that the intensity of the radiation emitted by an ideal black body at thermal equilibrium goes to infinity as the wavelength decreases (see Figure \(\PageIndex{1}\)).\(^1\) A black body is an idealized object that absorbs and emits all frequencies. Classical physics can be used to derive an approximate equation describing the intensity of black body radiation as a function of frequency for a fixed temperature. The result is known as the Rayleigh-Jeans law, which, for wavelength \(\lambda\), is:\[ B_{\lambda }(T)={\dfrac {2ck_{\mathrm {B} }T}{\lambda ^{4}}} \label{17.3.1} \]where \(B_{\lambda }\) is the intensity of the radiation, expressed as the power emitted per unit emitting area, per steradian, per unit wavelength (the spectral radiance), \(c\) is the speed of light, \(k_{\mathrm{B}}\) is the Boltzmann constant, and \(T\) is the temperature in kelvins. The paradox (or rather, the breakdown of the Rayleigh–Jeans formula) happens at small wavelength \(\lambda\). If we take the limit for \(\lambda \rightarrow 0\) in Equation \ref{17.3.1}, we obtain \(B_{\lambda } \rightarrow \infty\). In other words, as the wavelength of the emitted light gets smaller (approaching the UV range), the intensity of the radiation approaches infinity, and the black body emits an infinite amount of energy. 
This divergence at short wavelengths (high frequencies) is called the ultraviolet catastrophe, and it is clearly unphysical. Max Planck explained the black body radiation in 1900 by assuming that the energies of the oscillations of the electrons responsible for the radiation must be proportional to integral multiples of the frequency, i.e.,\[ E = n h \nu = n h \dfrac{c}{\lambda} \label{17.3.2} \]Planck’s assumptions led to the correct form of the spectral function for a black body: \[ B_{\lambda }(\lambda ,T)={\dfrac {2hc^{2}}{\lambda ^{5}}}{\dfrac {1}{e^{hc/(\lambda k_{\mathrm {B} }T)}-1}}. \label{17.3.3} \]If we now take the limit for \(\lambda \rightarrow 0\) of Equation \ref{17.3.3}, it is easy to prove that \(B_{\lambda }\) goes to zero, in agreement with the experimental results and our intuition. Planck also found that for \(h = 6.626 \times 10^{-34} \; \text{J s}\), the experimental data could be reproduced exactly. Nevertheless, Planck could not offer a good justification for his assumption of energy quantization. Physicists did not take this energy quantization idea seriously until Einstein invoked a similar assumption to explain the photoelectric effect. This page titled 16.3: The Ultraviolet Catastrophe is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
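The limiting behaviors of the two formulas can be checked numerically. This Python sketch compares the Rayleigh-Jeans and Planck expressions at progressively shorter wavelengths; the temperature of 5000 K is an arbitrary illustrative choice:

```python
import math

h  = 6.626e-34   # Planck constant, J s
c  = 2.998e8     # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def rayleigh_jeans(lam, T):
    """Classical result: diverges as the wavelength lam -> 0."""
    return 2.0 * c * kB * T / lam**4

def planck(lam, T):
    """Planck's law: stays finite and goes to zero as lam -> 0."""
    return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5000.0  # K (arbitrary illustrative temperature)
for lam in (1e-5, 1e-6, 1e-7, 1e-8):   # 10 um down to 10 nm
    print(f"{lam:.0e} m:  RJ = {rayleigh_jeans(lam, T):.3e}   "
          f"Planck = {planck(lam, T):.3e}")
```

At long wavelengths the two expressions agree (the classical law is the low-frequency limit of Planck's law), while at short wavelengths the classical prediction diverges and Planck's falls to zero.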
16.4: The Photoelectric Effect
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/16%3A_The_Motivation_for_Quantum_Mechanics/16.04%3A_The_Photoelectric_Effect
In 1886 and 1887, Heinrich Hertz discovered that ultraviolet light can cause electrons to be ejected from a metal surface. According to the classical wave theory of light, the intensity of the light determines the amplitude of the wave, and so a greater light intensity should cause the electrons on the metal to oscillate more violently and to be ejected with a greater kinetic energy. In contrast, the experiment showed that the kinetic energy of the ejected electrons depends on the frequency of the light. The light intensity affects only the number of ejected electrons and not their kinetic energies. Einstein tackled the problem of the photoelectric effect in 1905. Instead of assuming that the electronic oscillators had energies given by Planck’s formula, Equation 17.3.2, Einstein assumed that the radiation itself consisted of packets of energy \(E = h \nu\), which are now called photons. Einstein successfully explained the photoelectric effect using this assumption, and he calculated a value of \(h\) close to that obtained by Planck.Two years later, Einstein showed that not only is light quantized, but so are atomic vibrations. Classical physics predicts that the molar heat capacity at constant volume (\(C_V\)) of a crystal is \(3 R\), where \(R\) is the molar gas constant. This works well for high temperatures, but for low temperatures \(C_V\) actually falls to zero. Einstein was able to explain this result by assuming that the oscillations of atoms about their equilibrium positions are quantized according to \(E = n h \nu\), Planck’s quantization condition for electronic oscillators. This demonstrated that the energy quantization concept was important even for a system of atoms in a crystal, which should be well-modeled by a system of masses and springs (i.e., by classical mechanics).This page titled 16.4: The Photoelectric Effect is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
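Einstein's photon hypothesis gives the maximum kinetic energy of an ejected electron as \(KE = h\nu - \phi\), where \(\phi\) is the metal's work function. A small sketch of this relation; the work function used below (sodium, about 2.28 eV) is a commonly quoted value taken purely as an illustration:

```python
# Einstein's relation for the photoelectric effect: the maximum kinetic
# energy of an ejected electron is KE = h*nu - phi, independent of light
# intensity. The work function below (sodium, ~2.28 eV) is illustrative.
h = 6.626e-34        # Planck constant, J s
eV = 1.602e-19       # J per electronvolt
phi = 2.28 * eV      # work function of the metal, J (assumed value)

def kinetic_energy(nu):
    """Max KE of an ejected electron; no emission below the threshold."""
    return max(0.0, h * nu - phi)

print(kinetic_energy(5.0e14))   # below threshold: 0.0 (no electrons)
print(kinetic_energy(1.0e15))   # above threshold: a positive KE in J
```

Note that the light intensity appears nowhere in the relation: increasing it supplies more photons (more ejected electrons) but leaves the kinetic energy per electron unchanged, exactly the observation that classical wave theory could not explain.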
16.5: Wave-Particle Duality
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/16%3A_The_Motivation_for_Quantum_Mechanics/16.05%3A_Wave-Particle_Duality
Einstein had shown that the momentum of a photon is\[ p = \dfrac{h}{\lambda}. \label{17.5.1} \]This can be easily shown as follows. Assuming \(E = h \nu\) for a photon and \(\lambda \nu = c\) for an electromagnetic wave, we obtain\[ E = \dfrac{h c}{\lambda} \label{17.5.2} \]Now we use Einstein’s relativity result, \(E = m c^2\), and the definition of momentum, \(p=mc\), to find: \[ \lambda = \dfrac{h}{p}, \label{17.5.3} \]which is equivalent to Equation \ref{17.5.1}. Note that \(m\) refers to the relativistic mass, not the rest mass, since the rest mass of a photon is zero. Since light can behave both as a wave (it can be diffracted, and it has a wavelength) and as a particle (it contains packets of energy \(h \nu\)), de Broglie reasoned in 1924 that matter also can exhibit this wave-particle duality. He further reasoned that matter would obey the same Equation \ref{17.5.3} as light. In 1927, Davisson and Germer observed diffraction patterns by bombarding metals with electrons, confirming de Broglie’s proposition.\(^1\) Rewriting the previous equations in terms of the wave vector, \(k=\dfrac{2\pi}{\lambda}\), and the angular frequency, \(\omega=2\pi\nu\), we obtain the following two equations\[ \begin{aligned} p &= \hbar k \\ E &= \hbar \omega, \end{aligned} \label{17.5.4} \]which are known as de Broglie’s equations. We will use these equations to develop wave mechanics in the next chapters. This page titled 16.5: Wave-Particle Duality is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
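As a quick numerical illustration of \(\lambda = h/p\): for electrons accelerated through roughly 54 V, as in the Davisson and Germer experiment, the de Broglie wavelength comes out to a few angstroms, comparable to atomic spacings in a crystal. A Python sketch, using the standard non-relativistic relation \(p=\sqrt{2m_e e V}\):

```python
import math

h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron rest mass, kg
e = 1.602e-19      # elementary charge, C

def de_broglie_wavelength(V):
    """Wavelength of an electron accelerated through V volts:
    eV = p^2/(2m)  ->  p = sqrt(2 m e V), lambda = h/p (non-relativistic)."""
    p = math.sqrt(2.0 * m_e * e * V)
    return h / p

# At ~54 V the wavelength is a few angstroms -- comparable to atomic
# spacings in a metal crystal, which is why diffraction is observable.
print(de_broglie_wavelength(54.0))   # ~1.67e-10 m
```

The same formula applied to a macroscopic object gives a wavelength many orders of magnitude smaller than any atomic scale, which is why wave behavior of everyday matter is unobservable.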
17.1: Newtonian Formulation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/17%3A_Classical_Mechanics/17.01%3A_Newtonian_Formulation
Classical mechanics as formulated by Isaac Newton is all about forces. Newtonian mechanics works well for problems where we know the forces and have a reasonable coordinate system. In these cases, the net force acting on a system at position \(q\) is simply:\[ F_{\mathrm{net}}(q) = m\ddot{q} = m \dfrac{d^2 q}{dt^2}. \label{18.1.1} \]Or, in other words, if we know the net force acting on a system of mass \(m\) at position \(q\) at some time \(t_0\), we can use Equation \ref{18.1.1} to calculate the position of the system at any future (or past) time. We have completely determined the dynamical evolution of the system.\(^1\) A ball of mass \(m\) is at ground level and tossed straight up from an initial position \(q_0\) with an initial velocity \(\dot{q}_0\) and subject to gravity alone. Calculate the equation of motion for the ball (i.e. where is the ball going to be after some time \(t\)?).\(^2\) Since the only force acting on the ball is gravity, we can use the equation for the gravitational force to start our derivation:\[ F_{\mathrm{gravity}}=-mG, \nonumber \]with \(G\) the gravitational acceleration at the Earth’s surface (\(G=9.8\; \mathrm{m}/\mathrm{s}^{2}\)). We can then replace this expression into Equation \ref{18.1.1}, to obtain:\[ \begin{aligned} -mG &=m \ddot{q} \\ -G &=\ddot{q} \\ -G &=\dfrac{d\dot{q}}{dt}, \\ \end{aligned} \nonumber \]which can then be integrated with respect to time, to obtain:\[ \begin{aligned} -G\int_{t=0}^{t} dt &=\int_{\dot{q}_0}^{\dot{q}} d\dot{q}\\ \dot{q} &= \dot{q}_0-Gt\\ \dfrac{dq}{dt} &= \dot{q}_0-Gt,\\ \end{aligned} \nonumber \]which can be further integrated with respect to time, to give:\[ \begin{aligned} \int_{q_0}^{q} dq &= \int_{t=0}^{t} \dot{q}_0 dt -G \int_{t=0}^{t}tdt\\ q &= q_0 + \dot{q}_0 t -\dfrac{1}{2}Gt^2. \end{aligned} \nonumber \]This final equation is the equation of motion for the ball, from which we can calculate the position of the ball at any time \(t\). 
Notice how the equation of motion does not depend on the mass of the ball! How much time will a ball ejected from a height of \(1 \;\mathrm{m}\) at an initial velocity of \(10 \;\mathrm{m/s}\) take to hit the floor? We can use the equation of motion obtained above to solve this problem, and obtain for this specific case \(t\simeq 2.14 \;\mathrm{s}\).\(^3\) The formulas of Newtonian mechanics are not the only ones we can use to solve a problem in classical mechanics. We have at least two other equivalent approaches to the same problem that might end up being more useful in certain situations. This page titled 17.1: Newtonian Formulation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
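The exercise can be double-checked numerically by setting \(q(t)=0\) in the equation of motion and taking the positive root of the resulting quadratic:

```python
import math

# Numerical check of the exercise: q(t) = q0 + v0*t - (1/2)*G*t^2 with
# q0 = 1 m, v0 = 10 m/s, G = 9.8 m/s^2; solve q(t) = 0 (positive root).
G = 9.8
q0, v0 = 1.0, 10.0

t_hit = (v0 + math.sqrt(v0**2 + 2.0 * G * q0)) / G
print(f"t = {t_hit:.2f} s")   # ~2.14 s

# Sanity check: the position at t_hit is (numerically) zero.
residual = q0 + v0 * t_hit - 0.5 * G * t_hit**2
```

Consistent with the remark above, the flight time depends only on \(q_0\), \(\dot{q}_0\), and \(G\), never on the mass.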
17.2: Lagrangian Formulation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/17%3A_Classical_Mechanics/17.02%3A_Lagrangian_Formulation
Another way to derive the equations of motion for classical mechanics is via the use of the Lagrangian and the principle of least action. The Lagrangian formulation is obtained by starting from the definition of the Lagrangian of the system:\[ L = K - V, \label{18.2.1} \]where \(K\) is the kinetic energy, and \(V\) is the potential energy. Both are expressed in terms of the coordinates \((q,\dot{q})\). Notice that for a fixed time, \(t\), \(q\) and \(\dot{q}\) are independent variables, since \(\dot{q}\) cannot be derived from \(q\) alone. The time integral of the Lagrangian is called the action, and is defined as:\[ S = \int_{t_1}^{t_2} L\, dt, \label{18.2.2} \]which is a functional: it takes in the Lagrangian function for all times between \(t_1\) and \(t_2\) and returns a scalar value. The equations of motion can be derived from the principle of least action,\(^1\) which states that the true evolution of a system \(q(t)\) described by the coordinate \(q\) between two specified states \(q_1 = q(t_1)\) and \(q_2 = q(t_2)\) at two specified times \(t_1\) and \(t_2\) is a minimum of the action functional. For a minimum point:\[ \delta S = 0. \label{18.2.3} \]Requiring that the true trajectory \(q(t)\) minimizes the action functional \(S\), we obtain the equation of motion.\(^2\) This can be achieved by applying classical variational calculus to the variation of the action integral \(S\) under perturbations of the path \(q(t)\), Equation \ref{18.2.3}. The resulting equation of motion (or set of equations in the case of many dimensions) is sometimes also called the Euler-Lagrange equation:\(^3\)\[ \dfrac{d}{dt}\left(\dfrac{\partial L}{\partial\dot q}\right)=\dfrac{\partial L}{\partial q}. 
\label{18.2.4} \]Let’s apply the Lagrangian mechanics formulas to the same problem as in the previous Example. The expressions of the kinetic energy, the potential energy, and the Lagrangian for our system are:\[ \begin{aligned} K &= \dfrac{1}{2}m\dot{q}^2 \\ V &= mGq \\ L &= K-V = \dfrac{1}{2}m\dot{q}^2 - mGq. \end{aligned} \nonumber \]To get the equation of motion using Equation \ref{18.2.4}, we need to first take the partial derivative of \(L\) with respect to \(q\) (right-hand side):\[ \dfrac{\partial L}{\partial q}=-mG, \nonumber \]and then we need the time derivative of the partial derivative of \(L\) with respect to \(\dot{q}\) (left-hand side):\[ \dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{q}} = \dfrac{d\left(m\dot{q}\right)}{dt}= m\ddot{q}. \nonumber \]Putting this together, we get:\[ \begin{aligned} m\ddot{q}&=-mG \\ \ddot{q} &= -G \\ \end{aligned} \nonumber \]which is the same result as obtained from the Newtonian method. Integrating twice, we get the exact same formulas, which we can use in the same way. The advantage of Lagrangian mechanics is that it is not constrained to a particular coordinate system. For example, if we have a bead moving along a wire, we can define the coordinate system as the distance along the wire, making the formulas much simpler than in Newtonian mechanics. Also, since the Lagrangian is built from kinetic and potential energies, it handles constraint forces much more gracefully. This page titled 17.2: Lagrangian Formulation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
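The principle of least action can also be verified numerically for this example: the action of the true trajectory is smaller than that of nearby paths with the same endpoints. A self-contained Python sketch, where the sinusoidal perturbation is an arbitrary choice that vanishes at the endpoints:

```python
import math

# Numerical illustration of the principle of least action for the ball:
# the true trajectory q(t) = v0*t - (1/2)*G*t^2 yields a smaller action
# S = integral of (K - V) dt than perturbed paths with the same endpoints.
m, G, v0, T = 1.0, 9.8, 10.0, 1.0
N = 2000   # integration steps

def action(eps):
    """Action of q(t) + eps*sin(pi t / T); the perturbation vanishes at
    t = 0 and t = T, so the endpoints stay fixed."""
    S, dt = 0.0, T / N
    for i in range(N):
        t = (i + 0.5) * dt   # midpoint rule
        q = v0 * t - 0.5 * G * t**2 + eps * math.sin(math.pi * t / T)
        qdot = v0 - G * t + eps * (math.pi / T) * math.cos(math.pi * t / T)
        S += (0.5 * m * qdot**2 - m * G * q) * dt   # L = K - V
    return S

S_true, S_plus, S_minus = action(0.0), action(+0.1), action(-0.1)
print(S_true, S_plus, S_minus)   # the unperturbed action is the smallest
```

For this Lagrangian the potential is linear in \(q\), so the second variation of \(S\) is strictly positive and the true path is a genuine minimum, not merely a stationary point.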
17.3: Hamiltonian Mechanics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/17%3A_Classical_Mechanics/17.03%3A_Hamiltonian_Mechanics
A third way of obtaining the equation of motion is Hamiltonian mechanics, which uses the generalized momentum in place of velocity as a coordinate. The generalized momentum is defined in terms of the Lagrangian and the coordinates \((q,\dot{q})\):\[ p = \dfrac{\partial L}{\partial\dot{q}}. \label{18.3.1} \]The Hamiltonian is defined from the Lagrangian by applying a Legendre transformation as:\(^1\)\[ H(p,q) = p\dot{q} - L(q,\dot{q}), \label{18.3.2} \]The Lagrangian equation of motion becomes a pair of equations known as the Hamiltonian system of equations:\[\begin{equation} \begin{aligned} \dot{p}=\dfrac{dp}{dt} &= -\dfrac{\partial H}{\partial q} \\ \dot{q}=\dfrac{dq}{dt} &= +\dfrac{\partial H}{\partial p}, \end{aligned} \end{equation} \label{18.3.3} \]where \(H=H(q,p,t)\) is the Hamiltonian of the system, which often corresponds to its total energy. For a closed system, it is the sum of the kinetic and potential energy in the system:\[ H = K + V. \label{18.3.4} \]Notice the difference between the Hamiltonian, Equation \ref{18.3.4}, and the Lagrangian, Equation 18.2.1. In Newtonian mechanics, the time evolution is obtained by computing the total force being exerted on each particle of the system, and from Newton’s second law, the time evolutions of both position and velocity are computed. In contrast, in Hamiltonian mechanics, the time evolution is obtained by computing the Hamiltonian of the system in the generalized momenta and inserting it into Hamilton’s equations. This approach is equivalent to the one used in Lagrangian mechanics, since the Hamiltonian is the Legendre transform of the Lagrangian. 
The main motivation to use Hamiltonian mechanics instead of Lagrangian mechanics comes from the simpler description of complex dynamic systems. Let’s apply the Hamiltonian mechanics formulas to the same problem as in the previous examples. Using Equation \ref{18.3.2}, the Hamiltonian can be written as:\[ H = m\dot{q}\dot{q} - \dfrac{1}{2} m \dot{q}^2+m G q = \dfrac{1}{2}m\dot{q} ^2+mGq. \label{18.3.5} \]Since the Hamiltonian really depends on position and momentum, we need to express this in terms of \(q\) and \(p\), with \(p = m\dot{q}\) for the momentum. This is not always possible, since it depends on the choice of coordinate system. For the trivial coordinate system of our simple one-dimensional problem, we have:\[ H=\dfrac{p^2}{2m}+mGq, \nonumber \]from which we can use eqs. \ref{18.3.3} to get:\[ \begin{aligned} \dot{q} &= \dfrac{\partial H}{\partial p} = \dfrac{p}{m} \\ \dot{p} &=-\dfrac{\partial H}{\partial q} = -mG. \end{aligned} \nonumber \]These equations highlight a major difference of the Hamiltonian method: we describe the system using two first-order differential equations, rather than one second-order differential equation. In order to get the equation of motion, we need to take the derivative of \(\dot{q}\):\[ \ddot{q} = \dfrac{d}{dt} \left( \dfrac{p}{m} \right) = \dfrac{\dot{p}}{m}, \nonumber \]and then, replacing the definition of \(\dot{p}\) obtained above, we get:\[ \ddot{q} = \dfrac{-mG}{m} = -G \nonumber \]which, once again, is the same result obtained in the two previous cases. Integrating this twice, we get the familiar equation of motion for our problem. This page titled 17.3: Hamiltonian Mechanics is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
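Hamilton's pair of first-order equations also lends itself to direct numerical integration. A minimal Python sketch for the falling-ball example, stepping \(q\) and \(p\) forward with a simple Euler-type scheme and comparing against the closed-form solution:

```python
# Integrating Hamilton's two first-order equations for the falling ball
# (H = p^2/2m + m*G*q, so qdot = +dH/dp = p/m and pdot = -dH/dq = -m*G),
# then comparing with the closed-form q(T) = q0 + v0*T - (1/2)*G*T^2.
m, G = 1.0, 9.8
q, p = 1.0, 10.0 * m        # q0 = 1 m, v0 = 10 m/s

steps = 10_000
T = 1.0
dt = T / steps
for _ in range(steps):
    q += dt * (p / m)        # qdot = +dH/dp
    p -= dt * (m * G)        # pdot = -dH/dq

q_exact = 1.0 + 10.0 * T - 0.5 * G * T**2
print(q, q_exact)            # both ~6.1 m
```

Working with two coupled first-order equations is exactly what general-purpose ODE integrators expect, which is one practical reason the Hamiltonian form is popular in numerical work.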
18.1: The Time-Independent Schrödinger Equation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/18%3A_The_Schrodinger_Equation/18.01%3A_The_Time-Independent_Schrodinger_Equation
We can start the derivation of the single-particle time-independent Schrödinger equation (TISEq) from the equation that describes the motion of a wave in classical mechanics:\[ \psi(x,t)=\exp[i(kx-\omega t)], \label{19.1.1} \]where \(x\) is the position, \(t\) is time, \(k=\dfrac{2\pi}{\lambda}\) is the wave vector, and \(\omega=2\pi\nu\) is the angular frequency of the wave. If we are not concerned with the time evolution, we can consider uniquely the derivatives of Equation \ref{19.1.1} with respect to the location, which are:\[ \begin{aligned} \dfrac{\partial \psi}{\partial x} &=ik\exp[i(kx-\omega t)] = ik\psi, \\ \dfrac{\partial^2 \psi}{\partial x^2} &=i^2k^2\exp[i(kx-\omega t)] = -k^2\psi, \end{aligned} \label{19.1.2} \]where we have used the fact that \(i^2=-1\). Assuming that particles behave as waves, as shown by de Broglie, we can now use the first of de Broglie’s equations, Equation 17.5.4, to replace \(k=\dfrac{p}{\hbar}\) and obtain:\[ \dfrac{\partial^2 \psi}{\partial x^2} = -\dfrac{p^2\psi}{\hbar^2}, \label{19.1.3} \]which can be rearranged to:\[ p^2 \psi = -\hbar^2 \dfrac{\partial^2 \psi}{\partial x^2}. 
\label{19.1.4} \]The total energy associated with a wave moving in space is simply the sum of its kinetic and potential energies:\[ E = \dfrac {p^{2}}{2m} + V(x), \label{19.1.5} \]from which we can obtain:\[ p^2 = 2m[E - V(x)], \label{19.1.6} \]which we can then replace into Equation \ref{19.1.4} to obtain:\[ 2m[E-V(x)]\psi = - \hbar^2 \dfrac{\partial^2 \psi}{\partial x^2}, \label{19.1.7} \]which can then be rearranged to the famous time-independent Schrödinger equation (TISEq):\[ - \dfrac{\hbar^2}{2m} \dfrac{\partial^2 \psi}{\partial x^2} + V(x) \psi = E\psi, \label{19.1.8} \]A two-body problem can also be treated by this equation if the mass \(m\) is replaced with a reduced mass \(\mu = \dfrac{m_1 m_2}{m_1+m_2}\).This page titled 18.1: The Time-Independent Schrödinger Equation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
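The key step of the derivation, that \(\partial^2\psi/\partial x^2 = -k^2\psi\) for a plane wave, can be checked numerically with a central finite difference; the values of \(k\), \(x\), and the step size below are arbitrary:

```python
import cmath

# Finite-difference check of the key step: for psi(x) = exp(i k x), the
# second spatial derivative equals -k^2 * psi. Values of k, x, and the
# step size dx are arbitrary.
k = 3.0
dx = 1e-4

def psi(x):
    return cmath.exp(1j * k * x)

x = 0.7
second_deriv = (psi(x + dx) - 2.0 * psi(x) + psi(x - dx)) / dx**2
print(second_deriv, -(k**2) * psi(x))   # the two agree closely
```

The residual difference scales as \(dx^2\), the usual truncation error of the central-difference stencil.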
18.2: The Time-Dependent Schrödinger Equation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/18%3A_The_Schrodinger_Equation/18.02%3A_The_Time-Dependent_Schrodinger_Equation
Unfortunately, the analogy with the classical wave equation that allowed us to obtain the TISEq in the previous section cannot be extended to the time domain by considering the equation that involves the partial first derivative with respect to time. Schrödinger himself presented his time-independent equation first, and then went back and postulated the more general time-dependent equation. We are following here the same strategy and simply give the time-dependent equation as a postulate. The single-particle time-dependent Schrödinger equation is:\[ i\hbar\dfrac{\partial \psi(x,t)}{\partial t}=-\dfrac{\hbar^2}{2m} \dfrac{\partial^2 \psi(x,t)}{\partial x^2}+V(x)\psi(x,t) \label{19.2.1} \]where \(V(x)\) represents the potential energy of the system. Obviously, the time-dependent equation can be used to derive the time-independent equation. If we write the wavefunction as a product of spatial and temporal terms, \(\psi(x, t) = \psi(x) f(t)\), then Equation \ref{19.2.1} becomes:\[ \psi(x) i \hbar \dfrac{df(t)}{dt} = f(t) \left[-\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} + V(x) \right] \psi(x), \label{19.2.2} \]which can be rearranged to:\[ \dfrac{i \hbar}{f(t)} \dfrac{df(t)}{dt} = \dfrac{1}{\psi(x)} \left[-\dfrac{\hbar^2}{2m} \dfrac{\partial^2}{\partial x^2} + V(x) \right] \psi(x). \label{19.2.3} \]Since the left-hand side of Equation \ref{19.2.3} is a function of \(t\) only and the right-hand side is a function of \(x\) only, the two sides must equal a constant. If we tentatively designate this constant \(E\) (since the right-hand side clearly must have the dimensions of energy), then we extract two ordinary differential equations, namely:\[ \dfrac{1}{f(t)} \dfrac{df(t)}{dt} = - \dfrac{i E}{\hbar} \label{19.2.4} \]and: \[ -\dfrac{\hbar^2}{2m} \dfrac{\partial^2\psi(x)}{\partial x^2} + V(x) \psi(x) = E \psi(x). \label{19.2.5} \]The latter equation is the TISEq. 
The former equation is easily solved to yield\[ f(t) = e^{-iEt / \hbar} \label{19.2.6} \]This solution, Equation \ref{19.2.6}, is purely oscillatory, since \(f(t)\) never changes in magnitude. Thus if:\[ \psi(x, t) = \psi(x) \exp\left(\dfrac{-iEt}{\hbar}\right), \label{19.2.7} \]then the total wave function \(\psi(x, t)\) differs from \(\psi(x)\) only by a phase factor of constant magnitude. There are some interesting consequences of this. First of all, the quantity \(\vert \psi(x, t) \vert^2\) is time independent, as we can easily show:\[ \vert \psi(x, t) \vert^2 = \psi^{*}(x, t) \psi(x, t)= \psi^{*}(x)\exp\left(\dfrac{iEt}{\hbar}\right)\psi(x)\exp\left(\dfrac{-iEt}{\hbar}\right)= \psi^{*}(x) \psi(x). \label{19.2.8} \]Wave functions of the form of Equation \ref{19.2.7} are called stationary states. The state \(\psi(x, t)\) is “stationary,” but the particle it describes is not! Of course Equation \ref{19.2.7} represents only a particular solution to the time-dependent Schrödinger equation. The general solution is much more complicated, and the factorization of the temporal part is often not possible:\(^1\)\[ \psi({\bf r}, t) = \sum_i c_i e^{-iE_it / \hbar} \psi_i({\bf r}) \nonumber \]This page titled 18.2: The Time-Dependent Schrödinger Equation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
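The time independence shown in Equation \ref{19.2.8} is easy to check numerically. The following sketch is not part of the original text: the Gaussian spatial part and the values \(E = 1\), \(\hbar = 1\) are arbitrary illustrative choices.

```python
# Check that a stationary state psi(x, t) = psi(x) exp(-i E t / hbar)
# has a time-independent probability density |psi(x, t)|^2.
# psi(x), E, and hbar below are arbitrary illustrative choices.
import numpy as np

hbar, E = 1.0, 1.0
x = np.linspace(-5.0, 5.0, 101)
psi_x = np.exp(-x**2 / 2.0)          # arbitrary (unnormalized) spatial part

def density(t):
    """|psi(x, t)|^2 for the stationary state at time t."""
    return np.abs(psi_x * np.exp(-1j * E * t / hbar))**2

# The density at t = 0 equals the density at any later time.
assert np.allclose(density(0.0), density(3.7))
```

The phase factor has unit magnitude at every time, so the density never changes; a superposition of states with different energies, as in the general solution, would not share this property.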
2.1: What is Thermodynamics?
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/02%3A_Zeroth_Law_of_Thermodynamics/2.01%3A_What_is_Thermodynamics
Thermodynamics is the branch of science that deals with heat and work, and their relation to energy. As the definition suggests, thermodynamics is concerned with two types of energy: heat and work. A formal definition of these forms of energy is as follows: heat is energy that gets transferred between substances as a result of a temperature difference, while work is energy transferred when the system exerts a force on its surroundings.As we saw in chapter 1, heat and work are not “well-behaved” quantities because they are path functions. While it might be simple to measure the amount of heat and/or work experimentally, these measured quantities cannot be used to define the state of a system. Since heat and work are path functions, their values depend directly on the methods used to transfer them (their paths). Understanding and quantifying these energy transfers is the reason why thermodynamics was developed in the first place. The origin of thermodynamics dates back to the seventeenth century, when people began to use heat and work for technological applications. These early scientists needed a mathematical tool to understand how heat and work were related to each other, and how they were related to the other variables that they were able to measure, such as temperature and volume.Before we even discuss the definition of energy and how it relates to heat and work, it is crucial to introduce the essential concept of temperature. Temperature is an intuitive concept that has a surprisingly complex definition at the microscopic level.\(^1\) However, for all our purposes, it is not essential to have a microscopic definition of temperature, as long as we have the guarantee that this quantity can be measured unambiguously. In other words, we only need a mathematical definition of temperature that agrees with the physical existence of thermometers.1. In fact, we will not even give a rigorous microscopic definition of temperature within this textbook.
2.2: The Zeroth Law of Thermodynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/02%3A_Zeroth_Law_of_Thermodynamics/2.02%3A_The_Zeroth_Law_of_Thermodynamics
The mathematical definition that guarantees that thermal equilibrium is an equivalence relation is called the zeroth law of thermodynamics. The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other. The law might appear trivial and possibly redundant, but it is a fundamental requirement for the mathematical formulation of thermodynamics, so it needs to be stated. The zeroth law can be summarized by the following simple mathematical relation:If \(T_A = T_B\), and \(T_B = T_C\), then \(T_A = T_C\).Notice that when we state the zeroth law, it appears intuitive. However, this is not necessarily the case. Let’s, for example, consider a pot of boiling water at \(P=1\;\mathrm{bar}\). Its temperature, \(T_{H_2O}\), is about 373 K. Let’s now submerge in this water a coin made of wood and another coin made of metal. After sufficient time, the wood coin will be in thermal equilibrium with the water, and its temperature \(T_W = T_{H_2O}\). Similarly, the metal coin will also be in thermal equilibrium with the water, hence \(T_M = T_{H_2O}\). According to the zeroth law, the temperature of the wood coin and that of the metal coin are precisely the same, \(T_W = T_M = 373\;\mathrm{K}\), even if they are not in direct contact with each other. Now here’s the catch: since wood and metal transmit heat in different manners, if I take the coins out of the water and put them immediately in your hands, one of them will merely feel hot, but the other will burn you. If you had to guess the temperature of the two coins without a thermometer, and without knowing that they were immersed in boiling water, would you suppose that they have the same temperature? Probably not.
2.3: Calculation of Heat
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/02%3A_Zeroth_Law_of_Thermodynamics/2.03%3A_Calculation_of_Heat
Heat (\(Q\)) is a property that gets transferred between substances. Similarly to work, the amount of heat that flows through a boundary can be measured, but its mathematical treatment is complicated because heat is a path function. As you probably recall from general chemistry, the ability of a substance to absorb heat is given by a coefficient called the heat capacity, which is measured in SI in \(\dfrac{\text{J}}{\text{mol K}}\). However, since heat is a path function, these coefficients are not unique, and we have different ones depending on how the heat transfer happens.The heat capacity at constant volume measures the ability of a substance to absorb heat at constant volume. Recasting from general chemistry: The molar heat capacity at constant volume is the amount of heat required to increase the temperature of 1 mol of a substance by 1 K at constant volume.This simple definition can be written in mathematical terms as:\[ C_V = \dfrac{đ Q_V}{n dT} \Rightarrow đ Q_V = n C_V dT. \label{2.3.1} \]Given a known value of \(C_V\), the amount of heat that gets transferred can be easily calculated by measuring the changes in temperature, after integration of Equation \ref{2.3.1}:\[ đ Q_V = n C_V dT \rightarrow \int đ Q_V = n \int_{T_i}^{T_f}C_V dT \rightarrow Q_V = n C_V \int_{T_i}^{T_f}dT, \label{2.3.2} \]which, assuming \(C_V\) independent of temperature, simply becomes:\[ Q_V \cong n C_V \Delta T. \label{2.3.3} \]Similarly to the previous case, the heat capacity at constant pressure measures the ability of a substance to absorb heat at constant pressure.
Recasting again from general chemistry:The molar heat capacity at constant pressure is the amount of heat required to increase the temperature of 1 mol of a substance by 1 K at constant pressure.And once again, this mathematical treatment follows:\[ C_P = \dfrac{đ Q_P}{n dT} \Rightarrow đ Q_P = n C_P dT \rightarrow \int đ Q_P = n \int_{T_i}^{T_f}C_P dT, \label{2.3.4} \]which, again assuming \(C_P\) independent of temperature, results in the simple formula:\[ Q_P \cong n C_P \Delta T. \label{2.3.5} \]
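As a quick numerical illustration of Equations \ref{2.3.3} and \ref{2.3.5}, the sketch below computes the heat needed to warm one mole of gas by 10 K at constant volume and at constant pressure. The monatomic ideal-gas values \(C_V = \frac{3}{2}R\) and \(C_P = \frac{5}{2}R\) are an illustrative assumption, not values given in this section.

```python
# Q = n * C * (T_f - T_i), valid when the heat capacity C is
# independent of temperature. C_V and C_P below are the monatomic
# ideal-gas values (an illustrative assumption).
R = 8.314            # molar gas constant, J/(mol K)

def heat(n, C, T_i, T_f):
    """Heat exchanged for a temperature change at constant C."""
    return n * C * (T_f - T_i)

n = 1.0              # mol
C_V = 1.5 * R        # J/(mol K), constant volume
C_P = 2.5 * R        # J/(mol K), constant pressure

Q_V = heat(n, C_V, 300.0, 310.0)   # about 124.7 J
Q_P = heat(n, C_P, 300.0, 310.0)   # about 207.9 J
# Q_P > Q_V: at constant pressure part of the heat is spent on expansion work.
```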
2.4: Calculation of Work
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/02%3A_Zeroth_Law_of_Thermodynamics/2.04%3A_Calculation_of_Work
In thermodynamics, work (\(W\)) is the ability of a system to transfer energy by exerting a force on its surroundings. Work can be measured simply by evaluating its effects, such as displacing a massive object by some amount of space. The mathematical treatment of work, however, is complicated because work is a path function. In the following sections, we will analyze how work is calculated in some prototypical situations commonly encountered in the thermodynamical treatment of systems.Let’s consider the situation where a special beaker with a piston that is free to move is filled with an ideal gas. The beaker sits on a desk, so the piston is not subject to any external forces other than the external pressure, \(P_{\text{ext}}\), and the internal pressure of the gas, \(P\).\(^1\) The piston is initially compressed to a position that is not in equilibrium \((i)\). After the process, the piston reaches a final equilibrium position \((f)\). How do we calculate the work (\(W\)) performed by the system?From basic physics, we recall that the infinitesimal amount of work associated with an object moving in space is given by the force acting on the object (\(F\)) multiplied by the infinitesimal amount it gets displaced (\(d h\)):\[ đ W = - Fdh, \label{2.4.1} \]where the negative sign comes from the system-centric chemistry sign convention, since the work here is performed by the system (expansion). What kind of force is moving the piston? It is the force due to the pressure of the gas. Relying upon another definition from physics, the pressure is the ratio between the force (\(F\)) and the area (\(A\)) that such force acts upon:\[ P = F/A.
\label{2.4.2} \]Obtaining \(F\) from Equation \ref{2.4.2} and replacing it in Equation \ref{2.4.1}, we obtain:\[ đ W = - P \underbrace{Adh}_{dV}, \label{2.4.3} \]and considering that \(Adh\) (area times infinitesimal height) is the definition of an infinitesimal volume \(dV\), we obtain:\[ đ W = - PdV, \label{2.4.4} \]If we want to calculate the amount of work performed by a system, \(W\), from Equation \ref{2.4.4}, we need to recall that \(đ W\) is an inexact differential. As such, we cannot integrate it from initial to final as for the (exact) differential of a state function, because:\[ \int_{i}^{f}đ W \neq W_f - W_i, \label{2.4.5} \]but rather:\[ \int_{\text{path}} đ W = W, \label{2.4.6} \]where the integration is performed along the path. Using Equation \ref{2.4.6}, we can integrate Equation \ref{2.4.4} as:\[ \int đ W = W = - \int_{i}^{f} PdV, \label{2.4.7} \]where the integral on the left-hand side is taken along the path,\(^2\) while the integral on the right-hand side can be taken between the initial and final states, since \(V\) is a state function. How do we solve the integral in Equation \ref{2.4.7}? It turns out that there are many different ways to solve this integral (perhaps not surprisingly, since the left-hand side depends on the path), which we will explore in the next section.At constant temperature, the piston moves along the following PV diagram (this curve is obtained from an ideal gas at constant \(T=298\) K):An expansion of the gas will happen between \(P_i\) and \(P_f\).
If the expansion happens in a one-step fast process, for example against external atmospheric pressure, then we can consider such final pressure constant (for example \(P_f=P_{\text{ext}} =1\;\mathrm{bar}\)), and solve the integral as:\[ W_{\text{1-step}} = - \int_{i}^{f} P_{\text{ext}}dV = -P_{\text{ext}} \int_{i}^{f} dV = -P_{\text{ext}} (V_f-V_i). \label{2.4.8} \]Notice how the work is negative, since during an expansion the work is performed by the system (recall the chemistry sign convention). The absolute value of the work\(^3\) represents the red area of the PV-diagram:\[ \left| W_{\text{1-step}} \right| = P_{\text{ext}} (V_f-V_i) \label{2.4.9} \]If the process happens in two steps by pausing at an intermediate position until equilibrium is reached, then we should calculate the work by dividing it into two separate processes, \(A\) and \(B\), and solve each one as we did in the previous case. The first process is an expansion between \(P_i\) and \(P_1\), with \(P_1\) constant. The absolute value of the work, \(W_A\), is represented by the blue area:\[ \left| W_A \right| = P_1 (V_1-V_i) \label{2.4.10} \]The second process is an expansion between \(P_1\) and \(P_f\), with \(P_f=P_{\text{ext}}\) constant. The absolute value of the work for this second process is represented by the green area:\[ \left| W_B \right| = P_f (V_f-V_1) \label{2.4.11} \]The total absolute value of the work for the 2-step process is given by the sum of the two areas:\[ \left| W_{\text{2-step}} \right| = \left| W_A \right| + \left| W_B \right| = P_1 (V_1-V_i)+P_f (V_f-V_1). \label{2.4.12} \]As can be easily verified by comparing the shaded areas in the plots, \(\left| W_{\text{2-step}} \right| > \left| W_{\text{1-step}} \right|\).We can easily extend this procedure to consider processes that happen in 3, 4, 5, …, \(n\) steps. What is the limit of this procedure? In other words, what happens when \(n \rightarrow \infty\)?
A simple answer is given by the plots in the next figure: the maximum amount of work is achieved in an \(\infty\)-step process, for which the work is calculated as:\[ \left| W_{\infty \text{-step}} \right| = \left| W_{\text{max}} \right| = \sum_{n=1}^{\infty} P_n(V_n-V_{n-1}) = \int_{i}^{f} PdV. \label{2.4.13} \]The integral on the right-hand side of Equation \ref{2.4.13} can be solved for an ideal gas by calculating the pressure using the ideal gas law \(P=\dfrac{nRT}{V}\), and solving the integral since \(n\), \(R\), and \(T\) are constant:\[ \left| W_{\text{max}} \right| = nRT \int_{i}^{f} \dfrac{dV}{V} = nRT \ln \dfrac{V_f}{V_i}. \label{2.4.14} \]This example demonstrates why work is a path function. If we perform a fast 1-step expansion, the system will perform an amount of work that is much smaller than the amount of work it can perform if the expansion between the same points happens slowly in an \(\infty\)-step process.The same considerations that we made up to this point for expansion processes hold, in mirror image, for compression ones. The only difference is that the work associated with compressions will have a positive sign, since it must be performed onto the system. As such, the amount of work for a transformation that happens in a finite number of steps will be an upper bound to the minimum amount of work required to compress the system.\(^4\) \(\left| W_{\text{min}} \right|\) for compressions is calculated as the area underneath the PV curve, exactly as \(\left| W_{\text{max}} \right|\) for expansions in Equation \ref{2.4.13}.
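The convergence of the \(n\)-step work toward the maximum work of Equation \ref{2.4.14} can be verified with a short numerical sketch (not part of the original text; the amount of gas and the volumes are illustrative). Each step expands the gas against the constant pressure of the next equilibrium point:

```python
# n-step isothermal expansion work for an ideal gas: each step pushes
# against the (constant) pressure of the next equilibrium volume. As the
# number of steps grows, |W| approaches n R T ln(V_f / V_i) from below.
# Amount of gas, temperature, and volumes are illustrative choices.
import math

R, T, n_mol = 8.314, 298.0, 1.0
V_i, V_f = 1.0e-3, 2.0e-3            # initial and final volumes, m^3

def work_n_steps(n_steps):
    """|W| for an n-step isothermal expansion of an ideal gas."""
    volumes = [V_i + (V_f - V_i) * k / n_steps for k in range(n_steps + 1)]
    W = 0.0
    for V_prev, V_next in zip(volumes, volumes[1:]):
        P = n_mol * R * T / V_next   # gas pressure at the end of the step
        W += P * (V_next - V_prev)
    return W

W_max = n_mol * R * T * math.log(V_f / V_i)
# |W| grows with the number of steps and approaches W_max from below.
assert work_n_steps(1) < work_n_steps(10) < work_n_steps(1000) < W_max
```

The pressure is evaluated at the end of each step, so every finite-step sum underestimates the area under the PV curve, exactly as the shaded areas in the plots underestimate the reversible work.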
21.1: Operators in Quantum Mechanics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/21%3A_Operators_and_Mathematical_Background/21.01%3A_Operators_in_Quantum_Mechanics
The central concept in this new framework of quantum mechanics is that every observable (i.e., any quantity that can be measured in a physical experiment) is associated with an operator. To distinguish between classical mechanics operators and quantum mechanical ones, we use a hat symbol \(\hat{}\) on top of the latter. Physical pure states in quantum mechanics are represented as unit-norm (probabilities are normalized to one) vectors in a special complex Hilbert space. Following this definition, an operator is a function that projects a vector in the Hilbert space onto the space of physical observables. Since observables are values that come up as the result of an experiment, quantum mechanical operators must yield real eigenvalues. Operators that possess this property are called Hermitian. In the wave mechanics formulation of quantum mechanics that we have seen so far, the wave function varies with space and time—or equivalently momentum and time—and observables are differential operators. A completely analogous formulation is possible in terms of matrices. In the matrix formulation of quantum mechanics, the norm of the physical state should stay fixed, so the evolution operator should be unitary, and the operators can be represented as matrices.The expectation value of an operator \(\hat{A}\) for a system with wave function \(\psi(\mathbf{r})\) living in a Hilbert space with unit vector \(\mathbf{r}\) (i.e., in three-dimensional Cartesian space \(\mathbf{r} = \left\{ x,y,z \right\}\)), is given by:\[ \langle A \rangle = \int \psi^{*}({\bf r}) \hat{A} \psi({\bf r}) d{\bf r}, \label{22.1.1} \]and if \(\hat{A}\) is a Hermitian operator, all physical observables are represented by such expectation values. It is easy to show that if \(\hat{A}\) is a linear operator with an eigenfunction \(g\), then any multiple of \(g\) is also an eigenfunction of \(\hat{A}\).Most of the properties of operators are obvious, but they are summarized below for completeness.
The sum and difference of two operators \(\hat{A}\) and \(\hat{B}\) are given by:\[ \begin{equation} \begin{aligned} (\hat{A} + \hat{B}) f &= \hat{A} f + \hat{B} f \\ (\hat{A} - \hat{B}) f &= \hat{A} f - \hat{B} f. \end{aligned} \end{equation}\label{22.1.2} \]The product of two operators is defined by:\[ \hat{A} \hat{B} f \equiv \hat{A} [ \hat{B} f ] \label{22.1.3} \]Two operators are equal if \[ \hat{A} f = \hat{B} f \label{22.1.4} \]for all functions \(f\). The identity operator \(\hat{1}\) does nothing (or multiplies by 1):\[ {\hat 1} f = f \label{22.1.5} \]The associative law holds for operators:\[ \hat{A}(\hat{B}\hat{C}) = (\hat{A}\hat{B})\hat{C} \label{22.1.6} \]The commutative law does not generally hold for operators. In general, \(\hat{A} \hat{B} \neq \hat{B} \hat{A}\). It is convenient to define the quantity:\[ [\hat{A}, \hat{B}]\equiv \hat{A} \hat{B} - \hat{B} \hat{A} \label{22.1.7} \]which is called the commutator of \(\hat{A}\) and \(\hat{B}\). Note that the order matters, so that \([ \hat{A}, \hat{B}] = - [ \hat{B}, \hat{A}]\). If \(\hat{A}\) and \(\hat{B}\) happen to commute, then \([\hat{A}, \hat{B}] = 0\).Almost all operators encountered in quantum mechanics are linear. A linear operator is any operator \(\hat{A}\) satisfying the following two conditions:\[ \begin{equation} \begin{aligned} \hat{A} (f + g) &= \hat{A} f + \hat{A} g, \\ \hat{A} (c f) &= c \hat{A} f, \end{aligned} \end{equation}\label{22.1.8} \]where \(c\) is a constant and \(f\) and \(g\) are functions. As an example, consider the operators \(\dfrac{d}{dx}\) and \(()^2\). We can see that \(\dfrac{d}{dx}\) is a linear operator because:\[ \begin{equation} \begin{aligned} \dfrac{d}{dx}[f(x) + g(x)] &=\dfrac{d}{dx}f(x) + \dfrac{d}{dx}g(x), \\ \dfrac{d}{dx}[c f(x)] &= c (d/dx) f(x). 
\end{aligned} \end{equation}\label{22.1.9} \]However, \(()^2\) is not a linear operator because:\[ (f(x) + g(x))^2 \neq (f(x))^2 + (g(x))^2 \label{22.1.10} \]Hermitian operators are characterized by the self-adjoint property:\[ \int \psi_a^{*} (\hat{A} \psi_a)d{\bf r} = \int \psi_a (\hat{A} \psi_a)^{*}d{\bf r}, \label{22.1.11} \]where the integral is performed over all space. This property guarantees that all the eigenvalues of the operators are real. Defining \(a\) as the eigenvalue of operator \(\hat{A}\) using:\[ \hat{A} \psi({\bf r}) = a \psi({\bf r}), \label{22.1.12} \]we can prove that \(a\) is real by replacing Equation \ref{22.1.12} into Equation \ref{22.1.11}:\[ \begin{equation} \begin{aligned} a \int \psi_a^{*} \psi_a d{\bf r}&= a^{*} \int \psi_a \psi_a^{*} d{\bf r}\\ (a - a^{*}) \int \vert\psi_a\vert^2 d{\bf r} &= 0, \end{aligned} \end{equation}\label{22.1.13} \]and since \(\vert\psi_a\vert^2\) is never negative, either \(a = a^{*}\) or \(\psi_a = 0\). Since \(\psi_a = 0\) is not an acceptable wavefunction, \(a = a^{*}\), and \(a\) is real.The following additional properties of Hermitian operators can also be proven with some work:\[ \int \psi^{*}\hat{A} \psi d{\bf r} = \int (\hat{A} \psi)^{*} \psi d{\bf r}, \label{22.1.14} \]and for any two states \(\psi_1\) and \(\psi_2\):\[ \int \psi_1^{*} \hat{A} \psi_2 d{\bf r}= \int (\hat{A} \psi_1)^{*} \psi_2 d{\bf r}. \label{22.1.15} \]Taking \(\psi_a\) and \(\psi_b\) as eigenfunctions of \(\hat{A}\) with eigenvalues \(a\) and \(b\) with \(a \neq b\), and using Equation \ref{22.1.15}, we obtain:\[ \begin{equation} \begin{aligned} \int \psi_a^{*} \hat{A} \psi_b d{\bf r} &= \int (\hat{A} \psi_a)^{*} \psi_b d{\bf r}\\ b \int \psi_a^{*} \psi_b d{\bf r} &= a^{*} \int \psi_a^{*} \psi_b d{\bf r}\\ (b - a) \int \psi_a^{*} \psi_b d{\bf r} &= 0. \end{aligned} \end{equation}\label{22.1.16} \]Thus, since \(a = a^{*}\), and since we assumed \(b \neq a\), we must have \(\int \psi_a^{*} \psi_b d{\bf r} = 0\), i.e. 
\(\psi_a\) and \(\psi_b\) are orthogonal. In other words, eigenfunctions of a Hermitian operator with different eigenvalues are orthogonal (or can be chosen to be so).
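In the matrix formulation mentioned above, these two results (real eigenvalues and orthogonal eigenvectors) can be verified directly. The sketch below uses an arbitrary \(2 \times 2\) Hermitian matrix; it is an illustration, not part of the original text.

```python
# Verify the Hermitian-operator properties on an arbitrary 2x2 matrix:
# (1) the eigenvalues are real, (2) eigenvectors belonging to different
# eigenvalues are orthogonal.
import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)          # A is Hermitian (self-adjoint)

eigvals, eigvecs = np.linalg.eigh(A)       # eigh: solver for Hermitian matrices
assert np.allclose(eigvals.imag, 0.0)      # eigenvalues are real

v1, v2 = eigvecs[:, 0], eigvecs[:, 1]
assert abs(np.vdot(v1, v2)) < 1e-12        # eigenvectors are orthogonal
```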
21.2: Eigenfunctions and Eigenvalues
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/21%3A_Operators_and_Mathematical_Background/21.02%3A_Eigenfunctions_and_Eigenvalues
As we have already seen, an eigenfunction of an operator \(\hat{A}\) is a function \(f\) such that the application of \(\hat{A}\) on \(f\) gives \(f\) again, times a constant:\[ \hat{A} f = k f, \label{22.2.1} \]where \(k\) is a constant called the eigenvalue.When a system is in an eigenstate of observable \(A\) (i.e., when the wave function is an eigenfunction of the operator \(\hat{A}\)) then the expectation value of \(A\) is the eigenvalue of the wave function. Therefore, if:\[ \hat{A} \psi({\bf r}) = a \psi({\bf r}), \label{22.2.2} \]then:\[\begin{equation} \begin{aligned}\langle A \rangle &= \int \psi^{*}({\bf r}) \hat{A} \psi({\bf r}) d{\bf r} \\ &= \int \psi^{*}({\bf r}) a \psi({\bf r}) d{\bf r} \\ &= a \int \psi^{*}({\bf r}) \psi({\bf r}) d{\bf r} = a, \end{aligned} \end{equation} \label{22.2.3} \]where the last step uses the fact that the wave function is normalized:\[ \int \psi^{*}({\bf r}) \psi({\bf r}) d{\bf r} = 1. \label{22.2.4} \]This property of wave functions is called normalization, and in the one-electron TISEq it guarantees that the total probability of finding the electron over the entire space is one.\(^1\) A unique property of quantum mechanics is that a wave function can be expressed not just as a simple eigenfunction, but also as a combination of several of them. We have already encountered this property in part in the previous chapter, where complex hydrogen orbitals were combined to form the corresponding linear ones. As a general example, let us consider a wave function written as a linear combination of two eigenstates of \(\hat{A}\), with eigenvalues \(a\) and \(b\):\[ \psi = c_a \psi_a + c_b \psi_b, \label{22.2.5} \]where \(\hat{A} \psi_a = a \psi_a\) and \(\hat{A} \psi_b = b \psi_b\).
Then, since \(\psi_a\) and \(\psi_b\) are orthogonal and normalized (usually abbreviated as orthonormal), the expectation value of \(A\) is:\[\begin{equation} \begin{aligned}\langle A \rangle &= \int \psi^{*} \hat{A} \psi d{\bf r} \\ &= \int \left[ c_a \psi_a + c_b \psi_b \right]^{*} \hat{A} \left[ c_a \psi_a + c_b \psi_b \right] d{\bf r}\\ &= \int \left[ c_a \psi_a + c_b \psi_b \right]^{*} \left[ a c_a \psi_a + b c_b \psi_b \right] d{\bf r}\\ &= a \vert c_a\vert^2 \int \psi_a^{*} \psi_a d{\bf r} + b c_a^{*} c_b \int \psi_a^{*} \psi_b d{\bf r} + a c_b^{*} c_a \int \psi_b^{*} \psi_a d{\bf r} + b \vert c_b\vert^2 \int \psi_b^{*} \psi_b d{\bf r}\\ &= a \vert c_a\vert^2 + b \vert c_b\vert^2. \end{aligned} \end{equation} \label{22.2.6} \]This result shows that the average value of \(A\) is a weighted average of eigenvalues, with the weights being the squares of the coefficients of the eigenvectors in the overall wavefunction.\(^2\)
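Equation \ref{22.2.6} can be checked numerically in the matrix formulation. In the sketch below the operator is diagonal in its own eigenbasis, and the eigenvalues \(a = 1\), \(b = 4\) and weights \(\vert c_a\vert^2 = 0.3\), \(\vert c_b\vert^2 = 0.7\) are arbitrary illustrative choices.

```python
# For psi = c_a psi_a + c_b psi_b built from orthonormal eigenvectors of A
# with eigenvalues a and b, the expectation value is a|c_a|^2 + b|c_b|^2.
# Eigenvalues and coefficients below are illustrative.
import numpy as np

a, b = 1.0, 4.0
A = np.diag([a, b])                       # A written in its own eigenbasis
psi_a = np.array([1.0, 0.0])
psi_b = np.array([0.0, 1.0])

c_a, c_b = np.sqrt(0.3), np.sqrt(0.7)     # normalized: |c_a|^2 + |c_b|^2 = 1
psi = c_a * psi_a + c_b * psi_b

expval = np.vdot(psi, A @ psi).real
assert np.isclose(expval, a * abs(c_a)**2 + b * abs(c_b)**2)
# Here <A> = 1.0 * 0.3 + 4.0 * 0.7 = 3.1, a weighted average of a and b.
```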
21.3: Common Operators in Quantum Mechanics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/21%3A_Operators_and_Mathematical_Background/21.03%3A_Common_Operators_in_Quantum_Mechanics
Some common operators occurring in quantum mechanics are collected in the table below:In the sections below we analyze in detail two main operators, for the energy and for the angular momentum.The main quantity that quantum mechanics is interested in is the total energy of the system, \(E\). The operator corresponding to this quantity is called the Hamiltonian:\[ \hat{H} = - \dfrac{\hbar^2}{2} \sum_i \dfrac{1}{m_i} \nabla_i^2 + V, \label{22.3.1} \]where \(i\) is an index over all the particles of the system. Using the formalism of operators in conjunction with Equation \ref{22.3.1}, we can write the TISEq simply as:\[ \hat{H} \psi = E\psi. \label{22.3.2} \]Comparing Equation \ref{22.3.1} to the classical analog in Equation 18.3.2, we notice how the first term in the Hamiltonian operator represents the corresponding kinetic energy operator, \(\hat{K}\), while the second term represents the potential energy operator, \(\hat{V}\). For a one-electron system—such as the ones we studied in chapter 20—we can write:\[ \hat{K}=- \dfrac{\hbar^2}{2m} \left(\dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2} \right) = -\dfrac{\hbar^2}{2m} \nabla^2, \label{22.3.3} \]which is universal and applies to all systems. The potential energy operator \(\hat{V}\) is what differentiates each system. Using Equation \ref{22.3.2}, we can then obtain the TISEq for each of the first three models discussed in chapter 20 by simply using:\[\begin{equation} \begin{aligned} \text{Free particle:}\qquad \hat{V} &= 0, \\ \text{Particle in a box:}\qquad \hat{V} &= 0 \; \text{inside the box, } \hat{V} = \infty \; \text{outside the box},\\ \text{Harmonic oscillator:}\qquad \hat{V} &= \dfrac{1}{2}kx^2.
\\ \end{aligned} \end{equation} \label{22.3.4} \]While these three cases are trivial to solve, the case of the rigid rotor is more complicated, since the kinetic energy operator needs to be expressed in spherical polar coordinates, as we will show in the next section.To write the kinetic energy operator \(\hat{K}\) for the rigid rotor, we need to express the Laplacian, \(\nabla^2\), in spherical polar coordinates:\[ \nabla^2=\nabla^2_r - \dfrac{\hat{L}^2}{\hbar^2 r^2}, \label{22.3.5} \]where \(\nabla_r^2 = \dfrac{1}{r^2}\dfrac{\partial}{\partial r} \left( r^2\dfrac{\partial}{\partial r} \right)\) is the radial Laplacian, and \(\hat{L}^2\) is the square of the total angular momentum operator, which is:\[\begin{equation} \begin{aligned} \hat{L}^2 &=\hat{L}\cdot\hat{L}=\left(\mathbf{i}\hat{L}_x+\mathbf{j}\hat{L}_y+\mathbf{k}\hat{L}_z\right)\cdot\left(\mathbf{i}\hat{L}_x+\mathbf{j}\hat{L}_y+\mathbf{k}\hat{L}_z \right) \\ &=\hat{L}_x^2+\hat{L}_y^2+\hat{L}_z^2, \end{aligned} \end{equation} \label{22.3.6} \]with \(\left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\}\) the unit vectors in three-dimensional space. The components along each direction, \(\left\{\hat{L}_x,\hat{L}_y,\hat{L}_z\right\}\), are then expressed in Cartesian coordinates using the following formulas:\[\begin{equation} \begin{aligned} \hat{L}_x &= -i\hbar\left(y\dfrac{\partial}{\partial z} - z \dfrac{\partial}{\partial y} \right), \\ \hat{L}_y &= -i \hbar \left(z\dfrac{\partial}{\partial x} - x \dfrac{\partial}{\partial z} \right), \\ \hat{L}_z &= -i \hbar \left(x\dfrac{\partial}{\partial y} - y \dfrac{\partial}{\partial x} \right).
\end{aligned} \end{equation} \label{22.3.7} \]The eigenvalue equation corresponding to the total angular momentum is:\[ \hat{L}^2 Y_{\ell}^{m_{\ell}}(\theta, \varphi) = \hbar^2 \ell(\ell+1) Y_{\ell}^{m_{\ell}}(\theta, \varphi), \label{22.3.8} \]where \(\ell\) is the azimuthal quantum number and \(Y_{\ell}^{m_{\ell}}(\theta, \varphi)\) are the spherical harmonics, both of which we already encountered in chapter 20. Recall once again that each energy level \(E_{\ell}\) is \((2\ell+1)\)-fold degenerate in \(m_{\ell}\), since \(m_{\ell}\) can have values \(-\ell, -\ell+1, \ldots, \ell-1, \ell\). This means that there are \((2\ell+1)\) states with the same energy \(E_{\ell}\), each characterized by the magnetic quantum number \(m_{\ell}\). This quantum number can be determined using the following eigenvalue equation:\[ \hat{L}_z Y_{\ell}^{m_{\ell}}(\theta, \varphi) = \hbar m_{\ell} Y_{\ell}^{m_{\ell}}(\theta, \varphi). \label{22.3.9} \]The interpretation of these results is rather complicated, since the angular momenta are quantum operators and they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically as in figure \(\PageIndex{1}\),\(^1\) where a set of states with quantum numbers \(\ell =2\) and \(m_{\ell}=-2,-1,0,1,2\) is reported. Since \(|L|={\sqrt {L^{2}}}=\hbar {\sqrt {6}}\), the vectors are all shown with length \(\hbar \sqrt{6}\). The rings represent the fact that \(L_{z}\) is known with certainty, but \(L_{x}\) and \(L_{y}\) are unknown; therefore every classical vector with the appropriate length and \(z\)-component is drawn, forming a cone. The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by \(\ell\) and \(m_{\ell}\) could be somewhere on this cone, but it cannot be defined for a single system.
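The angular-momentum algebra above can be made concrete with the standard \(\ell = 1\) matrix representation of \(\hat{L}_x\), \(\hat{L}_y\), \(\hat{L}_z\) in the basis of \(\hat{L}_z\) eigenstates. This is a sketch with \(\hbar = 1\); the matrices are standard results, not written out in the text.

```python
# Standard l = 1 angular-momentum matrices (hbar = 1) in the basis of
# L_z eigenstates m_l = +1, 0, -1.
import numpy as np

s = 1.0 / np.sqrt(2.0)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# L^2 = L_x^2 + L_y^2 + L_z^2 (Equation 22.3.6) gives l(l+1) hbar^2
# on every state, i.e. 2 for l = 1.
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
l = 1
assert np.allclose(L2, l * (l + 1) * np.eye(3))

# The components do not commute: [L_x, L_y] = i hbar L_z, which is why
# only L^2 and one component (conventionally L_z) can be known together.
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)
```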
22.1: Stern-Gerlach Experiment
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/22%3A_Spin/22.01%3A_Stern-Gerlach_Experiment
In 1920, Otto Stern and Walter Gerlach designed an experiment that unintentionally led to the discovery that electrons possess their own intrinsic, quantized spin, independent of the orbital motion they undergo within an atom. The experiment was done by putting a silver foil in an oven to vaporize its atoms. The silver atoms were collected into a beam that passed through an inhomogeneous magnetic field. The result was that the magnetic field split the beam into two (and only two) separate beams. The Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized into two components (up and down). Thus an atomic-scale system was shown to have intrinsically quantum properties. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate.If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole. If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle’s trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to its magnetic moment, producing some density distribution on the detector screen.
Instead, the particles passing through the Stern–Gerlach apparatus are equally distributed between two possible values, with half of them ending up at an upper spot (“spin up”), and the other half at a lower spot (“spin down”). Since the particles are deflected by a magnetic field, spin is a magnetic property associated with some intrinsic form of angular momentum. As we saw in chapter 6, the quantization of the angular momentum gives energy levels that are \((2\ell+1)\)-fold degenerate. Since along the direction of the magnet we observe only two possible eigenvalues for the spin, we conclude the following value for \(s\):\[ 2s+1=2 \quad\Rightarrow\quad s=\dfrac{1}{2}. \label{23.1.1} \]The Stern-Gerlach experiment proves that electrons are spin-\(\dfrac{1}{2}\) particles. These have only two possible spin angular momentum values measured along any axis, \(+\dfrac {\hbar }{2}\) or \(-\dfrac {\hbar }{2}\), a purely quantum mechanical phenomenon. Because its value is always the same, spin is regarded as an intrinsic property of electrons, and is sometimes known as “intrinsic angular momentum” (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles).The act of observing (measuring) the momentum along the \(z\) direction corresponds to the operator \(\hat{S}_z\), which projects the value of the total spin operator \(\hat{S}^2\) along the \(z\) axis. The eigenvalue equation for this projection operator is:\[ \hat{S}_z \phi = \hbar m_s \phi, \label{23.1.2} \]where \(m_s=\left\{-s,+s\right\}=\left\{-\dfrac{1}{2},+\dfrac{1}{2}\right\}\) is the spin quantum number along the \(z\) component. 
The eigenvalues for the total spin operator \(\hat{S}^2\), similarly to the angular momentum operator \(\hat{L}^2\) seen in Equation 22.3.6, are:\[ \hat{S}^2 \phi = \hbar^2 s(s+1) \phi. \label{23.1.3} \]The initial state of the particles in the Stern-Gerlach experiment is given by the following wave function:\[ \phi = c_1\, \phi_{\uparrow} + c_2 \,\phi_{\downarrow}, \label{23.1.4} \]where \(\uparrow=+\dfrac{\hbar}{2}\), \(\downarrow=-\dfrac{\hbar}{2}\), and the coefficients \(c_1\) and \(c_2\) are complex numbers. In this initial state, the spin can point in any direction. The expectation value of the operator \(\hat{S}_z\) (the quantity that the Stern-Gerlach experiment measures) can be obtained using Equation 22.2.6:\[ \begin{aligned} \langle S_z \rangle &= \int \phi^{*} \hat{S}_z \phi \, d\mathbf{s} \\ &= +\dfrac{\hbar}{2} \vert c_1\vert^2 -\dfrac{\hbar}{2} \vert c_2\vert^2, \end{aligned} \label{23.1.5} \]where the integration is performed along a special coordinate \(\mathbf{s}\) composed of only two values. Applying the normalization condition, Equation 6.2.4, we can obtain:\[ |c_{1}|^{2}+|c_{2}|^{2}=1 \quad\longrightarrow\quad |c_{1}|^{2}=|c_{2}|^{2}=\dfrac{1}{2}. \label{23.1.6} \]This equation is not sufficient to determine the values of the coefficients, since they are complex numbers. Equation \ref{23.1.6}, however, tells us that the squared magnitudes of the coefficients can be interpreted as the probabilities of the possible outcomes of the experiment. This is true because their values are obtained from the normalization condition, and the normalization condition guarantees that the system is observed with probability equal to one. 
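Since the spin coordinate \(\mathbf{s}\) takes only two values, the integral for \(\langle S_z \rangle\) reduces to ordinary \(2\times 2\) matrix algebra in the \(\{\phi_{\uparrow},\phi_{\downarrow}\}\) basis. The following minimal numerical sketch (an added illustration, working in units where \(\hbar=1\) and with an arbitrarily chosen relative phase) verifies that an equal-weight superposition gives \(\langle S_z \rangle = 0\):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# S_z in the {phi_up, phi_down} basis: (hbar/2) times the Pauli z matrix
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# A normalized superposition c1*phi_up + c2*phi_down with |c1|^2 = |c2|^2 = 1/2
# (the relative phase is arbitrary; the factor i is chosen just for illustration)
c1, c2 = 1 / np.sqrt(2), 1j / np.sqrt(2)
phi = np.array([c1, c2])

# <S_z> = phi* . Sz . phi, the two-valued analogue of the integral over s
expval = np.vdot(phi, Sz @ phi).real
# equals (hbar/2)|c1|^2 - (hbar/2)|c2|^2 = 0 for the equal-weight state
```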
Summarizing, since we started with random initial directions, each of the two states, \(\phi_{\uparrow}\) and \(\phi_{\downarrow}\), will be observed with equal probability of \(\dfrac{1}{2}\).This page titled 22.1: Stern-Gerlach Experiment is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
22.2: Sequential Stern-Gerlach Experiments
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/22%3A_Spin/22.02%3A_Sequential_Stern-Gerlach_Experiments
An interesting result can be obtained if we link multiple Stern–Gerlach apparatuses into one experiment and perform the measurement along two orthogonal directions in space. As we showed in the previous section, all particles leaving the first Stern-Gerlach apparatus are in an eigenstate of the \(\hat{S}_z\) operator (i.e., their spin is either “up” or “down” with respect to the \(z\)-direction). We can then take either one of the two resulting beams (for simplicity, let’s take the “spin up” output) and perform another spin measurement on it. If the second measurement is also aligned along the \(z\)-direction, then only one outcome will be measured, since all particles are already in the “spin up” eigenstate of \(\hat{S}_z\). In other words, measuring a particle that is already in an eigenstate of the corresponding operator leaves the state unchanged.If, however, we perform the spin measurement along a direction perpendicular to the original \(z\)-axis (i.e., the \(x\)-axis), then the output will distribute equally between “spin up” and “spin down” in the \(x\)-direction, which, to avoid confusion, we can call “spin left” and “spin right”. Thus, even though we knew the state of the particles beforehand, in this case the measurement resulted in a random spin flip into either of the two states along the measurement direction. Mathematically, this property is expressed by the nonvanishing of the commutator of the spin operators:\[ \left[\hat{S}_z,\hat{S}_x \right] \neq 0. \label{23.2.1} \]We can finally repeat the measurement a third time, with the magnet aligned along the original \(z\)-direction. According to classical physics, after the second apparatus we would expect to have one beam with characteristics “spin up” and “spin left”, and another with characteristics “spin up” and “spin right”. 
The outcome of the third measurement along the original \(z\)-axis should then be a single output with the “spin up” characteristic, regardless of which beam the magnet is applied to (since the “spin down” component should have been “filtered out” by the first apparatus, and the third magnet should simply leave the \(z\) component untouched). This is not what is observed. The output of the third measurement is, once again, two beams in the \(z\) direction, one with the “spin up” characteristic and the other with “spin down”.This experiment shows that spin is not a classical property. The Stern-Gerlach apparatus does not behave as a simple filter, selecting beams with one specific pre-determined characteristic. The second measurement along the \(x\) axis destroys the previous determination of the angular momentum in the \(z\) direction. This means that this property cannot be measured along two perpendicular directions at the same time.
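The sequence of measurements described above can be mimicked with a small simulation, modeling each magnet as a projective measurement that collapses the state onto one of the eigenstates of the corresponding operator. This is an added sketch (with a fixed random seed and illustrative helper names), not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Eigenstates of S_z and S_x in the z basis
up_z, down_z = np.array([1, 0], complex), np.array([0, 1], complex)
up_x = (up_z + down_z) / np.sqrt(2)    # "spin right"
down_x = (up_z - down_z) / np.sqrt(2)  # "spin left"

def measure(state, basis):
    """Projective measurement: pick an outcome with probability |<b|state>|^2
    and collapse onto that eigenstate."""
    probs = [abs(np.vdot(b, state))**2 for b in basis]
    k = rng.choice(len(basis), p=np.array(probs) / sum(probs))
    return k, basis[k]

# Beam prepared "spin up" along z by the first magnet
counts = {0: 0, 1: 0}
for _ in range(2000):
    state = up_z
    _, state = measure(state, [up_x, down_x])  # second magnet, along x
    k, state = measure(state, [up_z, down_z])  # third magnet, along z again
    counts[k] += 1
# counts[1] > 0: "spin down" reappears even though it was filtered out first
```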
22.3: Spin Operators
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/22%3A_Spin/22.03%3A_Spin_Operators
The mathematics of quantum mechanics tells us that \(\hat{S}_z\) and \(\hat{S}_x\) do not commute. When two operators do not commute, the two measurable quantities that are associated with them cannot be known at the same time.In 3-dimensional space there are three directions that are orthogonal to each other, \(\left\{x,y,z\right\}\). Thus, we can define a third spin projection operator along the \(y\) direction, \(\hat{S}_y\), corresponding to a new set of Stern-Gerlach experiments in which the second magnet is oriented along a direction orthogonal to the two that we considered in the previous section. The total spin operator, \(\hat{S}^2\), can then be constructed similarly to the total angular momentum operator of Equation 22.3.5, as:\[\begin{equation} \begin{aligned} \hat{S}^2 &=\hat{S}\cdot\hat{S}=\left(\mathbf{i}\hat{S}_x+\mathbf{j}\hat{S}_y+\mathbf{k}\hat{S}_z\right)\cdot\left(\mathbf{i}\hat{S}_x+\mathbf{j}\hat{S}_y+\mathbf{k}\hat{S}_z \right) \\ &=\hat{S}_x^2+\hat{S}_y^2+\hat{S}_z^2, \end{aligned} \end{equation}\label{23.3.1} \]with \(\left\{\mathbf{i},\mathbf{j},\mathbf{k}\right\}\) the unit vectors in three-dimensional space.Wolfgang Pauli explicitly derived the relationships between all three spin projection operators. Assuming the magnetic field along the \(z\) axis, Pauli’s relations can be written as simple equations involving the two possible eigenstates \(\phi_{\uparrow}\) and \(\phi_{\downarrow}\):\[\begin{equation} \begin{aligned} \hat{S}_x \phi_{\uparrow} = \dfrac{\hbar}{2} \phi_{\downarrow} \qquad \hat{S}_y \phi_{\uparrow} &= \dfrac{\hbar}{2} i \phi_{\downarrow} \qquad \hat{S}_z \phi_{\uparrow} = \dfrac{\hbar}{2} \phi_{\uparrow} \\ \hat{S}_x \phi_{\downarrow} = \dfrac{\hbar}{2} \phi_{\uparrow} \qquad \hat{S}_y \phi_{\downarrow} &= - \dfrac{\hbar}{2} i \phi_{\uparrow} \qquad \hat{S}_z \phi_{\downarrow} = -\dfrac{\hbar}{2} \phi_{\downarrow}, \end{aligned} \end{equation}\label{23.3.2} \]where \(i\) is the imaginary unit (\(i^2=-1\)). 
In other words, for \(\hat{S}_z\) we have eigenvalue equations, while the remaining components have the effect of permuting state \(\phi_{\uparrow}\) with state \(\phi_{\downarrow}\) after multiplication by suitable constants. We can use these equations, together with Equation 23.1.7, to calculate the commutator for each pair of spin projection operators:\[\begin{equation} \begin{aligned} \left[\hat{S}_x, \hat{S}_y\right] &= i\hbar\hat{S}_z \\ \left[\hat{S}_y, \hat{S}_z\right] &= i\hbar\hat{S}_x \\ \left[\hat{S}_z, \hat{S}_x\right] &= i\hbar\hat{S}_y, \end{aligned} \end{equation}\label{23.3.3} \]which proves that the three projection operators do not commute with each other.Proof of Commutator Between Spin Projection Operators.The equations in \ref{23.3.3} can be proved by writing the full eigenvalue equation and solving it using the definition of commutator, Equation 23.1.7, in conjunction with Pauli’s relations, Equations \ref{23.3.2}. For example, for the first pair:\[\begin{equation} \begin{aligned} \left[\hat{S}_x, \hat{S}_y\right] \phi_{\uparrow} &= \hat{S}_x\hat{S}_y\phi_{\uparrow}-\hat{S}_y\hat{S}_x\phi_{\uparrow} \\ &= \hat{S}_x \left(\dfrac{\hbar}{2}i \phi_{\downarrow} \right)-\hat{S}_y \left(\dfrac{\hbar}{2} \phi_{\downarrow} \right) \\ &= \dfrac{\hbar}{2}i \left(\dfrac{\hbar}{2} \phi_{\uparrow} \right)- \dfrac{\hbar}{2} \left(-\dfrac{\hbar}{2}i \phi_{\uparrow} \right) \\ &= \left(\dfrac{\hbar^2}{4}+\dfrac{\hbar^2}{4}\right)i\phi_{\uparrow} \\ &= i\hbar\,\dfrac{\hbar}{2} \phi_{\uparrow} \\ &= i\hbar\,\hat{S}_z \phi_{\uparrow} \end{aligned} \end{equation}\label{23.3.4} \]
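Pauli’s relations are compactly represented by \(2\times 2\) matrices (the Pauli matrices multiplied by \(\hbar/2\)). The following short numerical check (an added illustration, with \(\hbar=1\)) verifies the commutation relations and the value of \(\hat{S}^2\) for \(s=\dfrac{1}{2}\):

```python
import numpy as np

hbar = 1.0
# Matrix form of Pauli's relations in the {phi_up, phi_down} basis
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

ok_xy = np.allclose(comm(Sx, Sy), 1j * hbar * Sz)
ok_yz = np.allclose(comm(Sy, Sz), 1j * hbar * Sx)
ok_zx = np.allclose(comm(Sz, Sx), 1j * hbar * Sy)

# S^2 = Sx^2 + Sy^2 + Sz^2 = hbar^2 s(s+1) * identity, with s = 1/2
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
ok_s2 = np.allclose(S2, hbar**2 * (1 / 2) * (3 / 2) * np.eye(2))
```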
23.1: Postulate 1- The Wave Function Postulate
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/23%3A_Postulates_of_Quantum_Mechanics/23.01%3A_Postulate_1-_The_Wave_Function_Postulate
The state of a quantum mechanical system is completely specified by a function \(\Psi({\bf r}, t)\) that depends on the coordinates of the particle(s) and on time. This function, called the wave function or state function, has the important property that \(\Psi^{*}({\bf r}, t)\Psi({\bf r}, t) d\tau\) is the probability that the particle lies in the volume element \(d\tau\) located at \({\bf r}\) at time \(t\).The wave function must satisfy certain mathematical conditions because of this probabilistic interpretation. For the case of a single particle, the probability of finding it somewhere is 1, so that we have the normalization condition\[ \int_{-\infty}^{\infty} \Psi^{*}({\bf r}, t) \Psi({\bf r}, t) d\tau = 1 \label{24.1.1} \]It is customary to also normalize many-particle wave functions to 1. As we already saw for the particle in a box in chapter 20, a consequence of the first postulate is that the wave function must also be single-valued, continuous, and finite, so that derivatives can be defined and calculated at each point in space. This consequence allows for operators (which typically involve derivation) to be applied without mathematical issues.
23.3: Postulate 3- Individual Measurements
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/23%3A_Postulates_of_Quantum_Mechanics/23.03%3A_Postulate_3-_Individual_Measurements
In any measurement of the observable associated with operator \(\hat{A}\), the only values that will ever be observed are the eigenvalues \(a\) that satisfy the eigenvalue equation:\[ \hat{A} \Psi = a \Psi. \label{24.3.1} \]This postulate captures the central point of quantum mechanics: the values of dynamical variables can be quantized (although it is still possible to have a continuum of eigenvalues in the case of unbound states). If the system is in an eigenstate of \(\hat{A}\) with eigenvalue \(a\), then any measurement of the quantity \(A\) will yield \(a\). Although measurements must always yield an eigenvalue, the state does not have to be an eigenstate of \(\hat{A}\) initially.An arbitrary state can be expanded in the complete set of eigenvectors of \(\hat{A}\) \(\left(\hat{A}\Psi_i = a_i \Psi_i\right)\) as:\[ \Psi = \sum_i^{n} c_i \Psi_i, \label{24.3.2} \]where \(n\) may go to infinity. In this case, we only know that the measurement of \(A\) will yield one of the values \(a_i\), but we don’t know which one. However, we do know the probability that eigenvalue \(a_i\) will occur (it is the absolute value squared of the coefficient, \(\vert c_i\vert^2\), as we obtained already in chapter 22), leading to the fourth postulate below.
23.4: Postulate 4- Expectation Values and Collapse of the Wavefunction
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/23%3A_Postulates_of_Quantum_Mechanics/23.04%3A_Postulate_4-_Expectation_Values_and_Collapse_of_the_Wavefunction
If a system is in a state described by a normalized wave function \(\Psi\), then the average value of the observable corresponding to \(\hat{A}\) is given by:\[ \langle A \rangle = \int_{-\infty}^{\infty} \Psi^{*} \hat{A} \Psi d\tau. \label{24.4.1} \]An important consequence of the fourth postulate is that, after a measurement of \(\Psi\) yields some eigenvalue \(a_i\), the wave function immediately “collapses” into the corresponding eigenstate \(\Psi_i\). In other words, measurement affects the state of the system. This fact is used in many experimental tests of quantum mechanics, such as the Stern-Gerlach experiment. Think again about the sequential experiment that we discussed in chapter 23. The act of measuring the spin along one coordinate is not simply a “filtration” of some pre-existing feature of the wave function, but rather an act that changes the nature of the wave function itself, affecting the outcome of future experiments. This act corresponds to the collapse of the wave function, a process that remains unexplained to date. Notice how the controversy is not in the mathematics of the experiment, which we already discussed in the previous chapter without issues. The issues rather arise because we don’t know how to define the measurement act in itself (other than the fact that it is some form of quantum mechanical procedure with clear and well-defined macroscopic outcomes). This is why the collapse of the wave function is also sometimes called the measurement problem of quantum mechanics, and it is still a source of research and debate among modern scientists.
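The third and fourth postulates can be illustrated together numerically: expanding an arbitrary state in the eigenvectors of a Hermitian operator gives outcome probabilities \(\vert c_i\vert^2\), the two expressions for \(\langle A \rangle\) agree, and a single measurement collapses the state onto one eigenstate. A sketch with a hypothetical \(2\times 2\) observable (the matrix and state are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A Hermitian operator (observable) and an arbitrary normalized state
A = np.array([[2.0, 1.0], [1.0, 3.0]])
psi = np.array([0.6, 0.8])

a, vecs = np.linalg.eigh(A)      # eigenvalues a_i, eigenvectors Psi_i
c = vecs.T @ psi                 # expansion coefficients c_i
probs = np.abs(c)**2             # outcome probabilities |c_i|^2

# <A> from the integral form and from the spectral expansion sum_i |c_i|^2 a_i
expval_integral = psi @ A @ psi
expval_spectral = np.sum(probs * a)

# A single measurement yields a[k] and collapses psi onto eigenstate k;
# a repeated measurement of the collapsed state yields a[k] again
k = rng.choice(len(a), p=probs)
psi_after = vecs[:, k]
```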
23.6: Postulate 6- Pauli Exclusion Principle
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/23%3A_Postulates_of_Quantum_Mechanics/23.06%3A_Postulate_6-_Pauli_Exclusion_Principle
The total wave function of a system of \(N\) particles with half-integer spin (called fermions; electrons, with spin \(\dfrac{1}{2}\), belong to this class) must be antisymmetric with respect to the interchange of all coordinates of one particle with those of another. For particles with integer spin (called bosons), the wave function must instead be symmetric:\[ \begin{aligned} \Psi\left({\bf r}_1,{\bf r}_2,\ldots, {\bf r}_N\right) &= - \Psi\left({\bf r}_2,{\bf r}_1,\ldots, {\bf r}_N\right) \quad \text{fermions}, \\ \Psi\left({\bf r}_1,{\bf r}_2,\ldots, {\bf r}_N\right) &= + \Psi\left({\bf r}_2,{\bf r}_1,\ldots, {\bf r}_N\right) \quad \text{bosons}. \end{aligned} \label{24.6.1} \]Electronic spin must be included in this set of coordinates. As we will see in chapter 26, the mathematical treatment of the antisymmetry postulate gives rise to the Pauli exclusion principle, which states that two or more identical fermions cannot occupy the same quantum state simultaneously (while bosons are perfectly capable of doing so).
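The two symmetry requirements can be checked numerically by building a two-particle wave function from one-particle functions on a grid. The functions below are purely illustrative (not actual orbitals); the antisymmetrized combination changes sign under exchange of the two coordinates, while the symmetrized one does not:

```python
import numpy as np

# Two illustrative one-particle functions sampled on a 1D grid
x = np.linspace(-3, 3, 61)
chi1 = np.exp(-x**2)
chi2 = x * np.exp(-x**2)

# Psi(x1, x2) on the grid: antisymmetric (fermions) vs symmetric (bosons)
psi_fermi = (np.outer(chi1, chi2) - np.outer(chi2, chi1)) / np.sqrt(2)
psi_bose = (np.outer(chi1, chi2) + np.outer(chi2, chi1)) / np.sqrt(2)

antisymmetric = np.allclose(psi_fermi, -psi_fermi.T)  # swap x1 <-> x2
symmetric = np.allclose(psi_bose, psi_bose.T)
pauli_zero = np.allclose(np.diag(psi_fermi), 0)       # vanishes at x1 = x2
```

The last check shows the Pauli exclusion principle emerging from antisymmetry: the fermionic wave function vanishes whenever the two particles share the same coordinates.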
24.1: The Double-slit Experiment
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/24%3A_Quantum_Weirdness/24.01%3A_The_Double-slit_Experiment
The double-slit experiment is considered by many to be the seminal experiment of quantum mechanics. The reason why we encounter it only at this advanced point is that its interpretation is not as straightforward as it might seem from a superficial analysis. The famous physicist Richard Feynman was so fond of this experiment that he used to say that all of quantum mechanics can be understood by carefully thinking through its implications.The premise of the experiment is very simple: cut two slits in a solid material (such as a sheet of metal), send light or electrons through them, and observe what happens on a screen positioned at some distance on the other side. The results of this experiment, though, are far from straightforward.Let’s first consider the single-slit case. If light consisted of classical particles, and these particles were sent in a straight line through a single slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this “single-slit experiment” is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. This behavior is typical of waves, where diffraction explains the pattern as the result of the interference of the wave with the slit.If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands depends on the frequency of the illuminating light. 
The pattern observed on the screen is the result of this interference, as shown in figure \(\PageIndex{1}\). The interference pattern resulting from the double-slit experiment is observed not only with light, but also with beams of electrons and other small particles.The first twist in the plot comes if we perform the experiment by sending individual particles (e.g., individual photons or individual electrons). Sending particles through a double-slit apparatus one at a time results in single particles appearing on the screen, as expected. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (figure \(\PageIndex{2}\)). The resulting pattern on the screen is the same as if each individual particle had passed through both slits.This variation of the double-slit experiment demonstrates the wave–particle duality: the particle is measured as a single pulse at a single position, while the wave describes the probability of absorbing the particle at a specific place on the screen.A second twist happens if we place particle detectors at the slits with the intent of determining through which slit each particle goes. The interference pattern in this case disappears.This experiment illustrates that photons (and electrons) can behave as either particles or waves, but cannot be observed as both at the same time. The simplest interpretation of this experiment is that the wave function of the photon collapses into a deterministic position due to the interaction with the detector at the slit, and the interference pattern is therefore lost. This result also proves that in order to measure (detect) a photon, we must interact with it, an act that changes its wave function.The interpretation of the results of this experiment is not simple. 
As in other situations in quantum mechanics, the problem arises not because we cannot describe the experiment in mathematical terms, but because the mathematics that we need to describe it cannot be related to the macroscopic classical world we live in. According to the mathematics, in fact, the particles in the experiment are described exclusively in probabilistic terms (given by the square of the wave function). The macroscopic world, however, is not probabilistic, and the outcomes of experiments can be univocally measured. Several different ways of resolving this controversy have been proposed, including, for example, the possibility that quantum mechanics is incomplete (the emergence of probability being due to our ignorance of some more fundamental deterministic feature of nature), or the assumption that every time a measurement is performed on a quantum system the universe splits, and every possible measurable outcome is observed in different branches of our universe (we only happen to live in one of these branches, so we observe only one non-probabilistic result). The interpretation of quantum mechanics is still an unsolved problem in modern physics (luckily, it does not prevent us from using quantum mechanics in chemistry).
24.2: Heisenberg's Uncertainty Principle
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/24%3A_Quantum_Weirdness/24.02%3A_Heisenberg's_Uncertainty_Principle
Let’s now revisit the simple case of a free particle. As we saw in chapter 20, the wave function that solves the TISEq:\[ \psi(x) = A \exp(\pm ikx), \label{25.2.1} \]is the equation of a plane wave along the \(x\) direction. This result is in agreement with the de Broglie hypothesis, which says that every object in the universe has an associated wave. If this wave function describes a particle with mass (such as an electron), freely moving along one spatial direction \(x\), it would be reasonable to ask the question: where is the particle located? Analyzing Equation \ref{25.2.1}, however, it is not possible to answer this question, since \(\psi(x)\) is delocalized in space from \(x=-\infty\) to \(x=+\infty\). In other words, the position of the particle is extremely uncertain, because it could be essentially anywhere along the wave.Thus, for a free particle, the particle side of the wave-particle duality seems completely lost. We can, however, bring it back into the picture by writing the wave function as a sum of many plane waves, called a wave packet:\[ \psi (x)\propto \sum _{n}A_{n}\exp\left(\dfrac{ip_n x}{\hbar} \right), \label{25.2.2} \]where \(A_n\) represents the relative contribution of the mode \(p_n\) to the overall total. We are allowed to write the wave function this way because each individual plane wave is a solution of the TISEq, and, as we already saw in chapter 22 and several other places, a sum of individual solutions is also a solution. An interesting consequence of writing the wave function as a wave packet is that when we sum different waves, they interfere with each other and can localize in some region of space. Thus, for a wave function written as in Equation \ref{25.2.2}, the wave packet can become more localized. 
We may also take this procedure a step further, to the continuum limit, where the wave function goes from a sum to an integral over all possible modes:\[ \psi (x)=\dfrac {1}{\sqrt{2\pi\hbar}}\int_{-\infty }^{\infty }\varphi (p)\cdot \exp \left(\dfrac{ip x}{\hbar} \right)\,dp, \label{25.2.3} \]where \(\varphi(p)\) represents the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that \(\varphi (p)\) is the Fourier transform of \(\psi (x)\) and that \(x\) and \(p\) are conjugate variables. Adding together all of these plane waves comes at a cost, however; namely, the momentum has become less precise, since it becomes a mixture of waves of many different momenta.One way to quantify the precision of the position and momentum is the standard deviation, \(\sigma\). Since \(|\psi (x)|^{2}\) is a probability density function for position, we calculate its standard deviation, \(\sigma_x\). The precision of the position is improved (i.e., \(\sigma_x\) is reduced) by using many plane waves, thereby weakening the precision of the momentum (i.e., increasing \(\sigma_p\)). Another way of stating this is that \(\sigma_x\) and \(\sigma_p\) have an inverse relationship: once we know one with absolute precision, the other becomes completely unknown. This fact was discovered by Werner Heisenberg and is now called Heisenberg’s uncertainty principle. The mathematical treatment of this procedure results in the simple formula:\[ \sigma_{x}\sigma_{p} \geq \dfrac{\hbar }{2}. \label{25.2.4} \]The uncertainty principle can be extended to any pair of conjugate variables, including, for example, energy and time, angular momentum components along perpendicular directions, and spin components along perpendicular directions. 
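The relationship between the wave-packet construction and the uncertainty product can be checked numerically. The sketch below (an added illustration, with \(\hbar=1\) and an assumed Gaussian choice for the weights \(A_n\)) builds a packet as a discrete sum of plane waves, then computes \(\sigma_x\) from \(|\psi(x)|^2\) and \(\sigma_p\) from the Fourier transform; for a Gaussian packet the product saturates the bound \(\hbar/2\):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-40, 40, 4096)
dx = x[1] - x[0]

# Wave packet as a sum of plane waves exp(i p x / hbar) with Gaussian weights
p_modes = np.linspace(-2, 2, 401)
amps = np.exp(-p_modes**2 / (2 * 0.5**2))  # assumed Gaussian A_n
psi = (amps[:, None] * np.exp(1j * p_modes[:, None] * x / hbar)).sum(0)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)  # normalize

# sigma_x from the position probability density |psi(x)|^2
prob_x = np.abs(psi)**2
mean_x = (x * prob_x).sum() * dx
sigma_x = np.sqrt(((x - mean_x)**2 * prob_x).sum() * dx)

# sigma_p from the momentum-space density |phi(p)|^2 (discrete Fourier transform)
phi_p = np.fft.fftshift(np.fft.fft(psi))
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi * hbar
dp = p[1] - p[0]
prob_p = np.abs(phi_p)**2
prob_p /= prob_p.sum() * dp
mean_p = (p * prob_p).sum() * dp
sigma_p = np.sqrt(((p - mean_p)**2 * prob_p).sum() * dp)

product = sigma_x * sigma_p  # close to hbar/2 for a Gaussian packet
```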
It is also easy to show that conjugate variables in quantum mechanics correspond to non-commuting operators.
24.3: Tunneling
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/24%3A_Quantum_Weirdness/24.03%3A_Tunneling
Tunneling is a phenomenon in which a particle may cross a barrier even if it does not have sufficient kinetic energy to overcome the potential of the barrier itself. In this situation, the particle is said to “tunnel through” the barrier, a purely quantum mechanical phenomenon (figure \(\PageIndex{1}\)).To explain tunneling we must resort once again to the TISEq. A traveling or standing wave function incident on a non-infinite potential barrier (\(V_0\)) decays inside the potential as \(A_0\exp(-\alpha x)\), where \(A_0\) is the amplitude at the boundary, \(\alpha\) depends on the height of the potential, and \(x\) is the distance into the potential. If a second well exists at infinite distance from the first well, the probability decays to zero, so the probability of the particle existing in the second well is zero. If the second well is brought closer to the first well, the amplitude of the wave function at its boundary is no longer zero, so the particle may tunnel into that well from the first well. It would appear that the particle is “leaking” through the barrier; it can travel through it without having to surmount it. An important point to keep in mind is that tunneling conserves energy. The final sum of the kinetic and potential energy of the system cannot exceed the initial sum. Therefore, the potential on both sides of the barrier does not need to be the same, but the sum of the ground state energy and the potential on the opposite side of the barrier may not be larger than the initial particle energy and potential.Tunneling can be described using the TISEq, Equation 22.3.1. For the tunneling problem we can take the potential \(V\) to be zero for all space, except for the region inside the barrier (between \(0\) and \(a\)):\[ V=\begin{cases} 0 &\text{if}\; -\infty<x\leq 0 \\ V_0 &\text{if}\; 0<x<a \\ 0 &\text{if}\; a\leq x<\infty, \end{cases} \]and solve the TISEq separately in each of the three regions, following the same procedure that we used for the particle in a box in chapter 20.
25.1: Many-Electron Wave Functions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/25%3A_Many-Electron_Atoms/25.01%3A_Many-Electron_Wave_Functions
When we have more than one electron, the sixth postulate that we discussed in chapter 24 comes into play. In other words, we need to account for the spin of the electrons, and we need the wave function to be antisymmetric with respect to exchange of the coordinates of any two electrons. In order to do so, we can define a new variable \({\bf x}\) which represents the set of all four coordinates associated with an electron: three spatial coordinates \({\bf r}\), and one spin coordinate \({\bf s}\), i.e., \({\bf x} = \{ {\bf r}, {\bf s} \}\). We can then write the electronic wave function as \(\Psi({\bf x}_1, {\bf x}_2, \ldots, {\bf x}_N)\), and we require the sixth postulate to hold by writing:\[ \Psi\left({\bf x}_1,{\bf x}_2,\ldots, {\bf x}_N\right) = - \Psi\left({\bf x}_2,{\bf x}_1,\ldots, {\bf x}_N\right) \label{26.1.1} \]A very important step in simplifying \(\Psi({\bf x})\) is to expand it in terms of a set of one-electron functions. Since we need to take into account the spin coordinate as well, we can define a new function, called a spin-orbital, by multiplying a spatial orbital by one of the two spin functions:\[ \begin{equation} \begin{aligned} \chi({\bf x}) &= \psi({\bf r}) \phi_{\uparrow}({\bf s}), \\ \chi({\bf x}) &= \psi({\bf r}) \phi_{\downarrow}({\bf s}). \end{aligned} \end{equation} \label{26.1.2} \]Notice that for a given spatial orbital \(\psi({\bf r})\), we can form two spin-orbitals, one with \(\uparrow\) spin, and one with \(\downarrow\) spin (since the spin coordinate \({\bf s}\) has only two possible values, as already discussed in chapter 23). For the spatial orbitals we can use the same one-particle functions that solve the TISEq for the hydrogen atom, \(\psi_{n\ell m_{\ell}}({\bf r})\) (Equation 21.7 in chapter 21). Notice how each spin-orbital now depends on four quantum numbers: the three for the spatial part, \(n,\ell,m_{\ell}\), plus the spin quantum number \(m_s\). 
We need to keep in mind, however, that the spin-orbitals, \(\chi_{n\ell m_{\ell} m_{s}}\), are not analytic solutions to the TISEq, so the resulting wave function is not the exact wave function of the system, but just an approximation.Once we have defined one-electron spin-orbitals for each electron in the system, we can use them as the basis for our many-electron wave function. While doing so, we need to make sure to enforce the antisymmetry property of the overall wave function. We will start from the simplest case of an atom with two electrons with coordinates \(\mathbf{x}_1\) and \(\mathbf{x}_2\), which we put in two spin-orbitals \(\chi_1\) and \(\chi_2\). We can write the total wave function as a linear combination of the two spin-orbitals as:\[ \begin{equation} \begin{aligned} \Psi({\bf x}_1, {\bf x}_2) =& b_{11} \chi_1({\bf x}_1) \chi_1({\bf x}_2) + b_{12} \chi_1({\bf x}_1) \chi_2({\bf x}_2) + \\ & b_{21} \chi_2({\bf x}_1) \chi_1({\bf x}_2) + b_{22} \chi_2({\bf x}_1) \chi_2({\bf x}_2). \end{aligned} \end{equation} \label{26.1.3} \]We then notice that in order for the antisymmetry principle to be obeyed, we need \(b_{12} = -b_{21}\) and \(b_{11} = b_{22} = 0\), which give:\[ \Psi({\bf x}_1, {\bf x}_2) = b_{12} \left[ \chi_1({\bf x}_1) \chi_2({\bf x}_2) - \chi_2({\bf x}_1) \chi_1({\bf x}_2)\right]. \label{26.1.4} \]This wave function is sufficient to describe two-electron atoms and ions, such as helium. The numerical coefficient can be determined imposing the normalization condition, and is equal to \(b_{12} = \dfrac{1}{\sqrt{2}}\). 
For the ground state of helium, we can replace the spatial component of each spin-orbital with the \(1s\) hydrogenic orbital, \(\psi_{100}\), resulting in:\[ \begin{equation} \begin{aligned} \Psi({\bf x}_1, {\bf x}_2) &= \dfrac{1}{\sqrt{2}} \left[ \psi_{100}({\bf r}_1)\phi_{\uparrow} \; \psi_{100}({\bf r}_2)\phi_{\downarrow} - \psi_{100}({\bf r}_1)\phi_{\downarrow} \; \psi_{100}({\bf r}_2)\phi_{\uparrow} \right] \\ &= \psi_{100}({\bf r}_1)\psi_{100}({\bf r}_2) \dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right], \end{aligned} \end{equation} \label{26.1.5} \]which clearly shows how we need just one spatial orbital, \(\psi_{100}\), to describe the system, while the antisymmetry is taken care of by a suitable combination of spin functions, \(\dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right]\). Notice also that we commit a small inaccuracy when we say: “two electrons occupy one orbital, one with spin up and the other with spin down, with configuration \([\uparrow\downarrow]\)”, as is typically found in general chemistry textbooks. The reality of the spin configuration is indeed more complicated, and the ground state of helium should be represented as \(\dfrac{1}{\sqrt{2}}\left[\uparrow\downarrow-\downarrow\uparrow\right]\).In order to generalize from two electrons to \(N\), we can first observe how Equation (26.1.4) could be easily constructed by placing the spin-orbitals into a \(2\times2\) matrix and calculating its determinant:\[ \Psi({\bf x}_1, {\bf x}_2)= \dfrac{1}{\sqrt{2}}{\begin{vmatrix} \chi_1({\bf x}_1)&\chi_2({\bf x}_1)\\\chi_1({\bf x}_2)&\chi_2({\bf x}_2) \end{vmatrix}}, \label{26.1.6} \]where each column contains one spin-orbital, each row contains the coordinates of a single electron, and the vertical bars around the matrix mean that we need to calculate its determinant. 
This notation is called the Slater determinant, and it is the preferred way of building any \(N\)-electron wave function. Slater determinants are useful because they can be easily built for any case of \(N\) electrons in \(N\) spin-orbitals, and they also automatically enforce the antisymmetry of the resulting wave function. A general Slater determinant is written:\[ \Psi (\mathbf{x} _{1},\mathbf{x} _{2},\ldots ,\mathbf{x} _{N})={\dfrac {1}{\sqrt {N!}}}{\begin{vmatrix}\chi _{1}(\mathbf{x} _{1})&\chi _{2}(\mathbf{x} _{1})&\cdots &\chi _{N}(\mathbf{x} _{1})\\\chi _{1}(\mathbf{x} _{2})&\chi _{2}(\mathbf{x} _{2})&\cdots &\chi _{N}(\mathbf{x} _{2})\\\vdots &\vdots &\ddots &\vdots \\\chi _{1}(\mathbf{x} _{N})&\chi _{2}(\mathbf{x} _{N})&\cdots &\chi _{N}(\mathbf{x} _{N})\end{vmatrix}} = |\chi _{1},\chi _{2},\cdots ,\chi _{N}\rangle, \label{26.1.7} \]where the notation \(|\cdots\rangle\) is a shorthand to indicate the Slater determinant where only the diagonal elements are reported.This page titled 25.1: Many-Electron Wave Functions is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
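As a small numerical illustration of Equation 26.1.7, the determinant can be evaluated directly with NumPy. The three one-particle functions below are arbitrary stand-ins for real spin-orbitals (a hypothetical choice for illustration only); swapping two electron coordinates flips the sign, and placing two electrons at the same coordinates makes the determinant vanish:

```python
import math
import numpy as np

def slater(chis, xs):
    """Normalized Slater determinant (Eq. 26.1.7) for N spin-orbitals `chis`
    evaluated at N electron coordinates `xs`."""
    N = len(chis)
    # row i = electron i, column j = spin-orbital j
    M = np.array([[chi(x) for chi in chis] for x in xs])
    return np.linalg.det(M) / math.sqrt(math.factorial(N))

# three hypothetical one-particle functions standing in for spin-orbitals
chis = [lambda x: math.exp(-x),
        lambda x: x * math.exp(-x),
        lambda x: x**2 * math.exp(-x)]

xs = [0.5, 1.0, 2.0]
swapped = [1.0, 0.5, 2.0]   # exchange electrons 1 and 2
print(slater(chis, xs), slater(chis, swapped))
```

Two identical rows (two electrons with the same coordinates) give a zero determinant, so the antisymmetry requirement is enforced automatically, as stated above.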
25.2: Approximated Hamiltonians
In order to solve the TISEq for a many-electron atom we also need to approximate the Hamiltonian, since analytic solutions using the full Hamiltonian, as in Equation 26.1, are impossible to find. The most important approximation method used in chemistry is called the variational method. The basic idea of the variational method is to guess a “trial” wave function for the problem, consisting of some adjustable parameters called “variational parameters”. These parameters are adjusted until the energy of the trial wave function is minimized. The resulting trial wave function and its corresponding energy are variational-method approximations to the exact wave function and energy. Why would it make sense that the best approximate trial wave function is the one with the lowest energy? This results from the Variational Theorem, which states that the energy of any trial wave function \(E\) is always an upper bound to the exact ground state energy \({\cal E}_0\). This can be proven easily. Let the trial wave function be denoted \(\Phi\). Any trial function can formally be expanded as a linear combination of the exact eigenfunctions \(\Psi_i\). Of course, in practice, we don’t know the \(\Psi_i\), since we are applying the variational method to a problem we can’t solve analytically. Nevertheless, that doesn’t prevent us from using the exact eigenfunctions in our proof, since they certainly exist and form a complete set, even if we don’t happen to know them. So, the trial wave function can be written:\[ \Phi = \sum_i c_i \Psi_i, \label{26.2.1} \]and the approximate energy corresponding to this wave function is:\[ E[\Phi] = \dfrac{\int \Phi^* {\hat H} \Phi d\mathbf{\tau}}{\int \Phi^* \Phi d\mathbf{\tau}}, \label{26.2.2} \]where \(\mathbf{\tau}=\left(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N\right)\) is the ensemble of the spatial coordinates of each electron and the integral symbol is assumed to imply a \(3N\)-dimensional integration. 
Replacing the expansion over the exact wave functions, we obtain:\[ E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j \int \Psi_i^* {\hat H} \Psi_jd\mathbf{\tau}}{ \sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_jd\mathbf{\tau}}. \label{26.2.3} \]Since the functions \(\Psi_j\) are the exact eigenfunctions of \({\hat H}\), we can use \({\hat H} \Psi_j = {\cal E}_j \Psi_j\) to obtain:\[ E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j {\cal E}_j \int \Psi_i^* \Psi_j d\mathbf{\tau}}{ \sum_{ij} c_i^* c_j \int \Psi_i^* \Psi_j d\mathbf{\tau}}. \label{26.2.4} \]Now using the fact that eigenfunctions of a Hermitian operator form an orthonormal set (or can be made to do so), we can write:\[ E[\Phi] = \dfrac{\sum_{i} c_i^* c_i {\cal E}_i}{\sum_{i} c_i^* c_i}. \label{26.2.5} \]We now subtract the exact ground state energy \({\cal E}_0\) from both sides to obtain\[ E[\Phi] - {\cal E}_0 = \dfrac{\sum_i c_i^* c_i ( {\cal E}_i - {\cal E}_0)}{ \sum_i c_i^* c_i}. \label{26.2.6} \]Since every term on the right-hand side is greater than or equal to zero, the left-hand side must also be greater than or equal to zero:\[ E[\Phi] \geq {\cal E}_0. \label{26.2.7} \]In other words, the energy of any approximate wave function is always greater than or equal to the exact ground state energy \({\cal E}_0\). This explains the strategy of the variational method: since the energy of any approximate trial function is always above the true energy, any variations in the trial function which lower its energy necessarily make the approximate energy closer to the exact answer. (The trial wave function is also a better approximation to the true ground state wave function as the energy is lowered, although not necessarily in every possible sense unless the limit \(\Phi = \Psi_0\) is reached.) We now have all the ingredients to attempt the simplest approximated solution to the TISEq of a many-electron atom. 
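The step from Equation 26.2.5 to the bound in Equation 26.2.7 can be illustrated with a toy calculation: take a hypothetical exact spectrum (particle-in-a-box levels \(E_n \propto n^2\), an arbitrary choice), pick random expansion coefficients, and check that the trial energy never falls below the ground state:

```python
import random

# Hypothetical exact spectrum: particle-in-a-box levels E_n ∝ n², arbitrary units
E = [n**2 for n in range(1, 11)]
E0 = E[0]

random.seed(1)
c = [random.uniform(-1, 1) for _ in E]   # random (real) expansion coefficients

# Eq. (26.2.5): energy of the trial function for orthonormal eigenfunctions
E_trial = sum(ci * ci * Ei for ci, Ei in zip(c, E)) / sum(ci * ci for ci in c)
print(E_trial, ">=", E0)
```

The weighted average of the \({\cal E}_i\) can never drop below the smallest of them, which is the content of the Variational Theorem; equality holds only when all the weight sits on the ground state (\(c_0=1\), all other \(c_i=0\)).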
We can start by writing the total wave function using the Slater determinant of Equation 26.1.7 in terms of spin-orbitals:\[ \Psi (\mathbf{x}_{1},\mathbf{x}_{2},\ldots ,\mathbf{x}_{N})= |\chi_{1},\chi_{2},\cdots ,\chi_{N}\rangle = |\psi_{1}\phi_{\uparrow},\psi_{1}\phi_{\downarrow},\cdots ,\psi_{\dfrac{N}{2}}\phi_{\uparrow},\psi_{\dfrac{N}{2}}\phi_{\downarrow}\rangle, \label{26.2.8} \]and then we can substitute it into the TISEq for an \(N\)-electron system. This results in a set of \(N\) one-electron equations, one for each electron. When we attempt to solve each individual equation, however, we end up with a problem, since the potential energy in the Hamiltonian of Equation 26.1 does not have spherical symmetry because of the electron-electron repulsion term. As such, the one-electron TISEq cannot simply be solved in spherical polar coordinates, as we did for the hydrogen atom in chapter 21. The simplest way of circumventing the problem is to neglect the electron-electron repulsion term (i.e., assume that the electrons are not correlated and do not interact with each other). For a two-electron atom this procedure is straightforward, since the Hamiltonian can be written as a sum of one-electron Hamiltonians:\[ \hat{H} =\hat{H}_1+\hat{H}_2, \label{26.2.9} \]with \(\hat{H}_1\) and \(\hat{H}_2\) looking identical to those used in the TISEq of the hydrogen atom. This one-particle Hamiltonian does not depend on the spin of the electron, and therefore we can neglect the spin component of the Slater determinant and write the total wave function for the ground state of helium, Equation 26.1.4, simply as:\[ \Psi({\bf r}_1, {\bf r}_2) = \psi_{100}({\bf r}_1)\psi_{100}({\bf r}_2). 
\label{26.2.10} \]The overall TISEq reduces to a set of two single-particle equations:\[ \begin{aligned} \hat{H}_1 \psi_{100}({\bf r}_1) &= E_1\psi_{100}({\bf r}_1) \\ \hat{H}_2 \psi_{100}({\bf r}_2) &= E_2\psi_{100}({\bf r}_2), \end{aligned} \label{26.2.11} \]which can then be solved similarly to those for the hydrogen atom, and the solutions combined to give:\[ E = E_1+E_2. \label{26.2.12} \]In other words, the resulting energy eigenvalue for the ground state of the helium atom in this approximation is equal to twice the energy of a \(\psi_{100}\) (\(1s\)) orbital. The resulting approximated value for the energy of the helium atom is \(7,217 \text{ kJ/mol}\), compared with the exact value of \(7,620 \text{ kJ/mol}\). The nuclear charge \(Z\) in the \(\psi_{100}\) orbital can be used as a variational parameter in the variational method to obtain a more accurate value of the energy. This method provides a result for the ground-state energy of the helium atom of \(7,478 \text{ kJ/mol}\) (only \(142 \text{ kJ/mol}\) lower than the exact value), with the nuclear charge parameter minimized at \(Z_{\text{min}}=1.6875\). This new value of the nuclear charge can be interpreted as the effective nuclear charge that is felt by one electron when a second electron is present in the atom. This value is lower than the real nuclear charge (\(Z=2\)) because the interaction between the electron and the nucleus is shielded by the presence of the second electron. This procedure can be extended to atoms with more than two electrons, resulting in the so-called Hartree-Fock method. The procedure, however, is not straightforward. We will explain it in more detail in the next chapter, since it is the simplest approximation that also describes the chemical bond.
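For reference, the variational optimization of the nuclear charge just described has a well-known closed form that the text does not derive: with a \(1s\) trial orbital of exponent \(Z\), the helium energy in hartree is \(E(Z)=Z^2-\tfrac{27}{8}Z\). A minimal sketch reproduces the quoted numbers (the hartree-to-kJ/mol conversion factor is approximate):

```python
# Standard closed-form variational energy of helium (hartree) for a 1s trial
# orbital with effective nuclear charge Z (a known result, not derived here):
def E(Z):
    return Z**2 - 27.0 / 8.0 * Z

# crude grid scan over the variational parameter Z in [1, 2]
Zs = [1.0 + 0.0001 * i for i in range(10001)]
Z_min = min(Zs, key=E)

hartree_to_kjmol = 2625.5   # approximate conversion factor
print(Z_min)                           # ≈ 1.6875
print(E(Z_min) * hartree_to_kjmol)     # ≈ -7477, matching the ~7,478 kJ/mol quoted
```

Setting \(dE/dZ = 2Z - 27/8 = 0\) gives \(Z_{\text{min}} = 27/16 = 1.6875\) exactly, the effective nuclear charge quoted above.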
26.1: The Molecular Hamiltonian
For a molecule, we can decompose the Hamiltonian operator as:\[ \hat{H} = \hat{K}_N +\hat{K}_{e} + \hat{V}_{NN} + \hat{V}_{eN} + \hat{V}_{ee} \label{27.1.1} \]where we have decomposed the kinetic energy operator into nuclear and electronic terms, \(\hat{K}_N\) and \(\hat{K}_e\), as well as the potential energy operator into terms representing the interactions between nuclei, \(\hat{V}_{NN}\), between electrons, \(\hat{V}_{ee}\), and between electrons and nuclei, \(\hat{V}_{eN}\). In atomic units, these operators are:\[ \begin{equation} \begin{aligned} \hat{K}_N &= -\sum_i^{\text{nuclei}} \dfrac{1}{2M_i}\nabla_i^2, \\ \hat{K}_e &= -\dfrac{1}{2}\sum_j^{\text{electrons}} \nabla_j^2, \\ \hat{V}_{NN} &= \sum_{i<k}^{\text{nuclei}} \dfrac{Z_i Z_k}{\left|\mathbf{R}_i-\mathbf{R}_k\right|}, \\ \hat{V}_{eN} &= -\sum_i^{\text{nuclei}}\sum_j^{\text{electrons}} \dfrac{Z_i}{\left|\mathbf{R}_i-\mathbf{r}_j\right|}, \\ \hat{V}_{ee} &= \sum_{j<l}^{\text{electrons}} \dfrac{1}{\left|\mathbf{r}_j-\mathbf{r}_l\right|}, \end{aligned}\end{equation} \label{27.1.2} \]where \(M_i\), \(Z_i\), and \(\mathbf{R}_i\) are the mass, atomic number, and coordinates of nucleus \(i\), respectively, and all other symbols are the same as those used in Equation 26.1 for the many-electron atom Hamiltonian.The operator in Equation \ref{27.1.1} is known as the “exact” nonrelativistic Hamiltonian in field-free space. However, it is important to remember that it neglects at least two effects. Firstly, although the speed of an electron in a hydrogen atom is less than 1% of the speed of light, relativistic mass corrections can become appreciable for the inner electrons of heavier atoms. Secondly, we have neglected the spin-orbit effects, which are explained as follows. From the point of view of an electron, it is being orbited by a nucleus which produces a magnetic field (proportional to \({\bf L}\)); this field interacts with the electron’s magnetic moment (proportional to \({\bf S}\)), giving rise to a spin-orbit interaction (proportional to \({\bf L} \cdot {\bf S}\) for a diatomic). Although spin-orbit effects can be important, they are generally neglected in quantum chemical calculations, and we will neglect them in the remainder of this textbook as well.
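The potential-energy terms \(\hat{V}_{NN}\), \(\hat{V}_{eN}\), and \(\hat{V}_{ee}\) are ordinary Coulomb sums, which can be evaluated for point charges at fixed positions. The geometry below is hypothetical, chosen only to exercise the formulas (atomic units throughout):

```python
import itertools
import math

def potential_terms(nuclei, electrons):
    """V_NN, V_eN, V_ee (atomic units) for point charges at fixed positions.
    nuclei: list of (Z, (x, y, z)); electrons: list of (x, y, z)."""
    V_NN = sum(Zi * Zj / math.dist(Ri, Rj)
               for (Zi, Ri), (Zj, Rj) in itertools.combinations(nuclei, 2))
    V_eN = -sum(Zi / math.dist(Ri, r) for Zi, Ri in nuclei for r in electrons)
    V_ee = sum(1.0 / math.dist(r1, r2)
               for r1, r2 in itertools.combinations(electrons, 2))
    return V_NN, V_eN, V_ee

# hypothetical H2-like configuration (distances in bohr)
nuclei = [(1, (0.0, 0.0, 0.0)), (1, (1.4, 0.0, 0.0))]
electrons = [(0.7, 0.5, 0.0), (0.7, -0.5, 0.0)]
print(potential_terms(nuclei, electrons))
```

For this configuration \(V_{NN}=1/1.4\) and \(V_{ee}=1\) (the electrons are one bohr apart), while \(V_{eN}\) is negative, reflecting the attractive electron-nucleus interaction.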
26.2: The Born-Oppenheimer Approximation
As we already saw in the previous chapter, if a Hamiltonian is separable into two or more terms, then the total eigenfunctions are products of the individual eigenfunctions of the separated Hamiltonian terms. The total eigenvalues are then sums of individual eigenvalues of the separated Hamiltonian terms. For example, let’s consider a Hamiltonian that is separable into two terms, one involving coordinate \(q_1\) and the other involving coordinate \(q_2\):\[ \hat{H} = \hat{H}_1(q_1) + \hat{H}_2(q_2) \label{27.2.1} \]with the overall Schrödinger equation being:\[ \hat{H} \psi(q_1, q_2) = E \psi(q_1, q_2). \label{27.2.2} \]If we assume that the total wave function can be written in the form:\[ \psi(q_1, q_2) = \psi_1(q_1) \psi_2(q_2), \label{27.2.3} \]where \(\psi_1(q_1)\) and \(\psi_2(q_2)\) are eigenfunctions of \(\hat{H}_1\) and \(\hat{H}_2\) with eigenvalues \(E_1\) and \(E_2\), then:\[ \begin{equation} \begin{aligned}\displaystyle \hat{H} \psi(q_1, q_2) &= ( \hat{H}_1 + \hat{H}_2 ) \psi_1(q_1) \psi_2(q_2) \\ &= \hat{H}_1 \psi_1(q_1) \psi_2(q_2) + \hat{H}_2 \psi_1(q_1) \psi_2(q_2) \\ &= E_1 \psi_1(q_1) \psi_2(q_2) + E_2 \psi_1(q_1) \psi_2(q_2) \\ &= (E_1 + E_2) \psi_1(q_1) \psi_2(q_2) \\ &= E \psi(q_1, q_2) \end{aligned} \end{equation} \label{27.2.4} \]Thus the eigenfunctions of \(\hat{H}\) are products of the eigenfunctions of \(\hat{H}_1\) and \(\hat{H}_2\), and the eigenvalues are the sums of eigenvalues of \(\hat{H}_1\) and \(\hat{H}_2\). If we examine the nonrelativistic Hamiltonian in Equation 27.1.1, we see that the \(\hat{V}_{eN}\) term prevents us from cleanly separating the electronic and nuclear coordinates and writing the total wave function as a product of an electronic and a nuclear part. If we neglect this term, we can write the total wave function as:\[ \psi({\bf r}, {\bf R}) = \psi_e({\bf r}) \psi_N({\bf R}), \label{27.2.5} \]This approximation is called the Born-Oppenheimer approximation, and allows us to treat the nuclei as nearly fixed with respect to electron motion. 
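The separability argument of Equation 27.2.4 has a convenient matrix analogue: if \(\hat{H}=\hat{H}_1+\hat{H}_2\) acts on a product space, its matrix is the Kronecker sum \(H_1\otimes I + I\otimes H_2\), whose eigenvalues are exactly the sums \(E_1+E_2\) and whose eigenvectors are products. A sketch with random symmetric matrices (the matrices themselves are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_sym(n):
    """Random real symmetric matrix, standing in for a separated Hamiltonian term."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

H1, H2 = rand_sym(3), rand_sym(4)

# Separable Hamiltonian on the product space: H = H1 ⊗ I + I ⊗ H2
H = np.kron(H1, np.eye(4)) + np.kron(np.eye(3), H2)

E1 = np.linalg.eigvalsh(H1)
E2 = np.linalg.eigvalsh(H2)
sums = np.sort((E1[:, None] + E2[None, :]).ravel())   # all pairwise sums E1 + E2

print(np.allclose(np.sort(np.linalg.eigvalsh(H)), sums))  # True
```

Every one of the \(3\times4=12\) eigenvalues of \(H\) is a sum of one eigenvalue of \(H_1\) and one of \(H_2\), mirroring \(E = E_1 + E_2\) in Equation 27.2.4.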
The Born-Oppenheimer approximation is almost always quantitatively correct, since the nuclei are much heavier than the electrons and the (fast) motion of the latter does not affect the (slow) motion of the former. Using this approximation, we can fix the nuclear configuration at some value, \({\bf R_a}\), and solve for the electronic portion of the wave function, which depends only parametrically on \({\bf R}\) (we write this wave function as \(\psi_e \left({\bf r}; {\bf R_a} \right)\), where the semicolon indicates the parametric dependence on the nuclear configuration). To solve the TISEq we can then write the electronic Hamiltonian as:\[ \hat{H}_{\text{e}} = \hat{K}_e({\bf r}) + \hat{V}_{eN}\left({\bf r}; {\bf R_a} \right) + \hat{V}_{ee}({\bf r}) \label{27.2.6} \]where we have also factored out the nuclear kinetic energy, \(\hat{K}_N\) (since it is smaller than \(\hat{K}_e\) by a factor of \(\dfrac{M_i}{m_e}\)), as well as \(\hat{V}_{NN}({\bf R})\). This latter approximation is justified, since in the Born-Oppenheimer approximation \({\bf R}\) is just a parameter, and \(\hat{V}_{NN}({\bf R_a})\) is a constant that shifts the eigenvalues only by some fixed amount. This electronic Hamiltonian results in the following TISEq:\[ \hat{H}_{e} \psi_e \left({\bf r}; {\bf R_a} \right) = E_{e} \psi_e \left({\bf r}; {\bf R_a} \right), \label{27.2.7} \]which is the equation that is used to explain the chemical bond in the next section. Notice that Equation \ref{27.2.7} is not the total TISEq of the system, since the nuclear eigenfunction and its eigenvalues (which can be obtained by solving the Schrödinger equation with the nuclear Hamiltonian) are neglected. As a final note, in the remainder of this textbook we will use the term “total energy” to mean “total energy at fixed geometry”, as is customary in many other quantum chemistry textbooks (i.e., we are neglecting the nuclear kinetic energy). 
This is just \(E_{e}\) of Equation \ref{27.2.7}, plus the constant shift, \(\hat{V}_{NN}({\bf R_a})\), given by the nuclear-nuclear repulsion.
26.3: Solving the Electronic Eigenvalue Problem
Once we have invoked the Born-Oppenheimer approximation, we can attempt to solve the electronic TISEq in Equation 27.2.7. However, for molecules with more than one electron (i.e., all molecules but \(\mathrm{H}_2^+\) and a few related highly exotic ions), we need, once again, to keep in mind the antisymmetry of the wave function. This means that we need to write the electronic wave function as a Slater determinant. Once this is done, we can work on approximating the Hamiltonian, a task that is necessary because the presence of the electron-electron repulsion term forbids its analytic treatment. Similarly to the many-electron atom case, the simplest approximation for solving the molecular electronic TISEq is to use the variational method and to neglect the electron-electron repulsion. As we noticed in the previous chapter, this approximation is called the Hartree-Fock method. The main difference when we apply the variational principle to a molecular Slater determinant is that we need to build orbitals (one-electron wave functions) that encompass the entire molecule. This can be done by assuming that the atomic contributions to the molecular orbitals will closely resemble the orbitals that we obtained for the hydrogen atom. The total molecular orbital can then be built by linearly combining these atomic contributions. This method is called the linear combination of atomic orbitals (LCAO). A consequence of the LCAO method is that the atomic orbitals on two different atomic centers are not necessarily orthogonal, and Equation 26.2.4 cannot be simplified easily. 
If we replace each atomic orbital \(\psi(\mathbf{r})\) with a linear combination of suitable basis functions \(\phi_i(\mathbf{r})\):\[ \psi(\mathbf{r}) = \sum_i^m c_{i} \phi_i(\mathbf{r}), \label{27.3.1} \]we can then use the following notation:\[ \displaystyle H_{ij} = \int \phi_i^* {\hat H} \phi_j d\mathbf{\tau}\;, \qquad \displaystyle S_{ij} = \int \phi_i^* \phi_jd\mathbf{\tau}, \label{27.3.2} \]to simplify Equation 26.2.4 to:\[ E[\Phi] = \dfrac{\sum_{ij} c_i^* c_j H_{ij}}{\sum_{ij} c_i^* c_j S_{ij}}. \label{27.3.3} \]Differentiating this energy with respect to the expansion coefficients \(c_i\) yields a non-trivial solution only if the following “secular determinant” equals zero:\[ \begin{vmatrix} H_{11}-ES_{11} & H_{12}-ES_{12} & \cdots & H_{1m}-ES_{1m}\\\ H_{21}-ES_{21} & H_{22}-ES_{22} & \cdots & H_{2m}-ES_{2m}\\\ \vdots & \vdots & \ddots & \vdots\\\ H_{m1}-ES_{m1} & H_{m2}-ES_{m2} & \cdots & H_{mm}-ES_{mm} \end{vmatrix}=0 \label{27.3.4} \]where \(m\) is the number of basis functions used to expand the atomic orbitals. Solving this set of equations with a Hamiltonian where the electron-electron correlation is neglected is non-trivial, but possible. The reason for the complications comes from the fact that even if we neglect the direct interaction between electrons, each of them interacts with the nuclei through an interaction that is screened by the average field of all the other electrons, similarly to what we saw for the helium atom. This means that the Hamiltonian itself and the values of the coefficients \(c_i\) in the wave function mutually depend on each other. A solution to this problem can be achieved numerically using specialized computer programs that implement a cycle called the self-consistent-field (SCF) procedure. Starting from an initial guess of the coefficients, an approximated Hamiltonian operator is built from them and used to solve Equation \ref{27.3.4}. 
This solution gives updated values of the coefficients, which can then be used to create an improved version of the approximated Hamiltonian. This procedure is repeated until both the coefficients and the operator no longer change. From this final solution, the energy of the molecule is then calculated.
27.1: The Chemical Bond in the Hydrogen Molecular Cation
This system has only one electron, but since its geometry is not spherical (figure \(\PageIndex{1}\)), the TISEq cannot be solved analytically as for the hydrogen atom.The electron is at point \(P\), while the two protons are at positions \(A\) and \(B\) at a fixed distance \(R\). Using the Born-Oppenheimer approximation we can write the one-electron molecular Hamiltonian in a.u. as:\[ \hat{H} = \hat{H}_e+\dfrac{1}{R} = \left( -\dfrac{1}{2}\nabla^2-\dfrac{1}{r_A}-\dfrac{1}{r_B} \right)+\dfrac{1}{R} \label{28.1.1} \]As a first approximation to the variational wave function, we can build the one-electron molecular orbital (MO) by linearly combining two \(1s\) hydrogenic orbitals centered at \(A\) and \(B\), respectively:\[ \varphi = c_1 a + c_2 b, \label{28.1.2} \]with:\[ \begin{equation} \begin{aligned} a &= 1s_A = \left( \psi_{100} \right)_A\\ b &= 1s_B = \left( \psi_{100} \right)_B. \end{aligned} \end{equation}\label{28.1.3} \]Using Equation 27.3.2 and considering that the nuclei are identical, we can define the integrals \(H_{aa}=H_{bb}, H_{ab}=H_{ba}\) and \(S_{ab}=S\) (while \(S_{aa}=1\) because the hydrogen atom orbitals are normalized). The secular equation, Equation 27.3.4, can then be written:\[ \begin{vmatrix} H_{aa}-E & H_{ab}-ES \\\ H_{ab}-ES & H_{aa}-E \end{vmatrix}=0 \label{28.1.4} \]The expansion of the determinant results in:\[ \begin{equation} \begin{aligned} (H_{aa}-E)^2 &=(H_{ab}-ES)^2 \\ H_{aa}-E &= \pm (H_{ab}-ES), \\ \end{aligned} \end{equation}\label{28.1.5} \]with roots:\[ \begin{equation} \begin{aligned} E_{+} &= \dfrac{H_{aa}+H_{ab}}{1+S} = H_{aa}+\dfrac{H_{ba}-SH_{aa}}{1+S}, \\ E_{-} &= \dfrac{H_{aa}-H_{ab}}{1-S} = H_{aa}-\dfrac{H_{ba}-SH_{aa}}{1-S}, \end{aligned} \end{equation}\label{28.1.6} \]the first corresponding to the ground state, the second to the first excited state. 
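It is easy to check numerically that the two roots in Equation 28.1.6 make the secular determinant of Equation 28.1.4 vanish. The values of \(H_{aa}\), \(H_{ab}\), and \(S\) below are hypothetical placeholders, not computed integrals:

```python
import numpy as np

# hypothetical values for the integrals (not actual computed ones)
H_aa, H_ab, S = -1.0, -0.8, 0.4

def secular_det(E):
    """2x2 secular determinant of Eq. (28.1.4) evaluated at energy E."""
    M = np.array([[H_aa - E,     H_ab - E * S],
                  [H_ab - E * S, H_aa - E    ]])
    return np.linalg.det(M)

E_plus  = (H_aa + H_ab) / (1 + S)   # bonding root, Eq. (28.1.6)
E_minus = (H_aa - H_ab) / (1 - S)   # antibonding root
print(secular_det(E_plus), secular_det(E_minus))  # both ≈ 0
```

With these sample numbers \(E_+ < E_-\), consistent with \(E_+\) being the ground state; substituting either root makes \((H_{aa}-E)^2 = (H_{ab}-ES)^2\), so the determinant vanishes identically.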
Solving for the best values of the coefficients of the linear combination for the ground state, \(E_{+}\), we obtain:\[ c_1=c_2=\dfrac{1}{\sqrt{2+2S}}, \label{28.1.7} \]which gives the bonding MO:\[ \varphi_{+}=\dfrac{a+b}{\sqrt{2+2S}}. \label{28.1.8} \]Proceeding similarly for the excited state, we obtain:\[ c_1=\dfrac{1}{\sqrt{2-2S}}\;\quad c_2=-\dfrac{1}{\sqrt{2-2S}}, \label{28.1.9} \]which gives the antibonding MO:\[ \varphi_{-}=\dfrac{b-a}{\sqrt{2-2S}}. \label{28.1.10} \]These results can be summarized in the molecular orbital diagram of figure \(\PageIndex{2}\). We notice that the splitting of the doubly degenerate atomic level under the interaction is non-symmetric for \(S\neq0\), the antibonding level being more repulsive and the bonding less attractive than in the symmetric case occurring for \(S = 0\). Calculating the values of the integrals and repeating these calculations for different internuclear distances, \(R\), results in the plot of figure \(\PageIndex{3}\). As we see from the plot, the ground state solution is negative for a vast portion of the plot. The energy is negative because the electronic energy calculated with the bonding orbital is lower than the nuclear repulsion. In other words, the creation of the molecular orbital stabilizes the molecular configuration with respect to the isolated fragments (one hydrogen atom and one proton).
27.2: The Chemical Bond in the Hydrogen Molecule
We can now examine the formation of the two-electron chemical bond in the \(\text{H}_2\) molecule. With reference to figure \(\PageIndex{1}\), the molecular Hamiltonian for \(\text{H}_2\) in a.u. in the Born-Oppenheimer approximation will be:\[\begin{equation} \begin{aligned} \hat{H} &= \hat{H}_e+\dfrac{1}{R} \\ &=\left( -\dfrac{1}{2}\nabla^2_1-\dfrac{1}{r_{A1}}-\dfrac{1}{r_{B1}} \right)+\left( -\dfrac{1}{2}\nabla^2_2-\dfrac{1}{r_{A2}}-\dfrac{1}{r_{B2}} \right)+\dfrac{1}{r_{12}}+\dfrac{1}{R}\\ &= \hat{h}_1+\hat{h}_2+\dfrac{1}{r_{12}}+\dfrac{1}{R}, \end{aligned} \end{equation}\label{28.2.1} \]where \(\hat{h}\) is the one-electron Hamiltonian. As in the previous case, we can build a first approximation to the molecular wave function by considering two \(1s\) atomic orbitals \(a(\mathbf{r}_1)\) and \(b(\mathbf{r}_2)\) centered at \(A\) and \(B\), respectively, having an overlap \(S\). If we neglect the electron-electron repulsion term, \(\dfrac{1}{r_{12}}\), the resulting Hartree-Fock equations are exactly the same as in the previous case. The most important difference, though, is that in this case we need to consider the spin of the two electrons. Proceeding similarly to what we have done for the many-electron atom in chapter 26, we can build an antisymmetric wave function for \(\text{H}_2\) using a Slater determinant of doubly occupied MOs. For the ground state, we can use the lowest energy orbital obtained from the solution of the Hartree-Fock equations, which we already obtained in Equation 28.1.8. Using a notation that is based on the symmetry of the molecule, this bonding orbital in \(\text{H}_2\) is usually called \(\sigma_g\), where \(\sigma\) refers to the \(\sigma\) bond that forms between the two atoms. 
The Slater determinant for the ground state is therefore:\[ \Psi (\mathbf{x}_{1},\mathbf{x}_{2})= |\sigma_{g}\phi_{\uparrow},\sigma_{g}\phi_{\downarrow}\rangle =\sigma_{g}(\mathbf{r}_1)\sigma_{g}(\mathbf{r}_2) \dfrac{1}{\sqrt{2}} \left[ \phi_{\uparrow}\phi_{\downarrow} - \phi_{\downarrow}\phi_{\uparrow} \right], \label{28.2.2} \]where:\[ \sigma_{g}=\varphi_{+}=\dfrac{\left(\psi_{100}\right)_A+\left(\psi_{100}\right)_B}{\sqrt{2+2S}}. \label{28.2.3} \]The energies and the resulting MO diagram are similar to those for \(\mathrm{H}_2^+\), with the only difference that two electrons will be described by the same \(\sigma_g\) MO (figure \(\PageIndex{2}\)). As for the many-electron atoms, the Hartree-Fock method is just an approximation to the exact solution. The accurate theoretical value for the bond energy at the bond distance of \(R_e=1.4\;a_0\) is \(E= -0.17447\;E_h\). The variational result obtained with the wave function in Equation \ref{28.2.2} is \(E= -0.12778\;E_h\), which is \(\sim 73 \%\) of the exact value. The variational coefficient (i.e., the orbital exponent, \(c_0\), that enters the \(1s\) orbital formula \(\psi_{100}=\sqrt{\dfrac{c_0^3}{\pi}}\exp[-c_0r]\)) is optimized at \(c_0=1.1695\), a value that shows how the orbitals significantly contract due to spherical polarization. If we scan the Born-Oppenheimer energy landscape using the wave function in Equation \ref{28.2.2} as we have done for \(\mathrm{H}_2^+\), we obtain the plot in figure \(\PageIndex{3}\). As we can see, the Hartree-Fock results for \(\mathrm{H}_2\) describe the formation of the bond qualitatively around the bond distance (minimum of the curve), but they fail to describe the molecule at dissociation. This happens because in Equation \ref{28.2.2} both electrons are in the same orbital with opposite spin (the electrons are coupled), and the orbital is shared among both centers. 
At dissociation, this corresponds to an erroneous ionic dissociation state where both electrons are localized on one of the two centers (which is therefore negatively charged), with the other proton left without electrons. This is in contrast with the correct dissociation, where each electron should be localized around one of the two centers (and therefore uncoupled from the other electron). This error is once again the result of the approximations that are necessary to treat the TISEq of a many-electron system. It is obviously not a failure of quantum mechanics, and it can be easily corrected using more accurate approximations on modern computers.
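As a quick arithmetic check on the \(\sim 73\%\) figure quoted above for \(\mathrm{H}_2\):

```python
E_exact = -0.17447   # hartree: accurate bond energy at R_e = 1.4 a0 (quoted above)
E_hf    = -0.12778   # hartree: variational result with the sigma_g Slater determinant

fraction = E_hf / E_exact
print(round(100 * fraction))  # → 73 (percent of the exact bond energy recovered)
```

The missing \(\sim 27\%\) is the correlation energy that a single doubly occupied determinant cannot capture, which is the same deficiency responsible for the wrong dissociation limit discussed above.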
28.1: The Chemical Bond in the Water Molecule Using a Minimal Basis
For a minimal representation of the two hydrogen atoms, we need two \(1s\) functions, one centered on each atom. Oxygen has electrons in the second principal quantum level, so we will need one \(1s\), one \(2s\), and three \(2p\) functions (one each of \(p_x\), \(p_y\), and \(p_z\)). Summarizing, for a minimal representation of the water wave function we need five orbitals on oxygen, plus one each on the hydrogen atoms, for a total of 7 functions. From these atomic functions, we can build a total wave function using the LCAO method of chapter 27, and then we can use the variational principle, in conjunction with the Hartree-Fock (HF) method, to build and solve a secular determinant similar to that in Equation 27.3.4, with \(m=7\) being the total number of basis functions. The approximated Hamiltonian operator in the HF method is called the Fock operator, and it can be divided into one-electron integrals, comprising the kinetic and potential energy contributions:\[\begin{equation} \begin{aligned} \displaystyle K_{ij} &= \int \phi_i^* {\hat K} \phi_j\; d\mathbf{\tau}=\int \phi_i^* {\left(-\dfrac{1}{2}\nabla^2\right)} \phi_j\; d\mathbf{\tau} \\ \displaystyle V_{ij} &= \int \phi_i^* {\hat V} \phi_j\;d\mathbf{\tau} = \int \phi_i^* {\left(-\sum_k^{\mathrm{nuclei}}\dfrac{Z_k}{r_k}\right)} \phi_j\; d\mathbf{\tau} , \end{aligned} \end{equation} \label{29.1.1} \]as well as two-electron integrals describing the coulomb repulsion between electrons:\[ V_{ijkl} = \iint \phi_i^* \phi_j^* {\hat r}_{12} \phi_k \phi_l\; d\mathbf{\tau_1}d\mathbf{\tau_2}=\iint \phi_i^* \phi_j^* \left(\dfrac{1}{r_{12}}\right) \phi_k \phi_l\; d\mathbf{\tau_1}d\mathbf{\tau_2}. \label{29.1.2} \]Despite the minimal basis set, the total number of integrals that need to be calculated for water is large, since \(i\), \(j\), \(k\), and \(l\) can each be any one of the 7 basis functions. 
Hence there are \(7\times7=49\) kinetic energy integrals, and the same number of potential energy integrals for each nucleus, resulting in \(7\times 7 \times 3 = 147\). The grand total of one-electron integrals is thus 196. For the two-electron integrals, we have \(7 \times 7 \times 7 \times 7 = 2{,}401\) integrals to calculate. Overall, for this simple calculation on water, we need almost \(2{,}600\) integrals.\(^1\) All this to find \(5\) occupied molecular orbitals from which to form a final Slater determinant (\(10\) electrons, two to an orbital, so \(5\) orbitals). The situation sounds horrible, but it should be recognized that the solutions to all of the integrals are known to be analytic formulae involving only interatomic distances, Cartesian exponents, and the values of a single exponent in the atomic functions. If we use slightly simpler Gaussian functions instead of the more complicated hydrogenic solutions, the total number of floating-point operations needed to solve the integrals is roughly \(1{,}000{,}000\), that is, one megaflop (megaflop = million FLoating-point OPerations). A modern processor can sustain many gigaflops (billions of floating-point operations) per second, so it can accomplish all these calculations in well under one second. An additional way in which things can be improved is to recognize that the molecule has symmetries that can be exploited to reduce the total number of integrals that needs to be calculated. This page titled 28.1: The Chemical Bond in the Water Molecule Using a Minimal Basis is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
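The integral counts above are simple bookkeeping, and they shrink considerably once symmetry is taken into account: for real basis functions the two-electron integrals have an 8-fold permutational symmetry (\(V_{ijkl}=V_{jikl}=V_{ijlk}=V_{klij}=\ldots\)). A minimal counting sketch (the function name is ours):

```python
def unique_eri_count(m):
    # Naive count is m**4 two-electron integrals. For real basis functions,
    # (ij|kl) is unchanged by swapping i<->j, k<->l, and (ij)<->(kl), so we
    # only need unique index pairs, and unique pairs of those pairs.
    pairs = m * (m + 1) // 2          # unique (i,j) combinations
    return pairs * (pairs + 1) // 2   # unique ((i,j),(k,l)) combinations

naive = 7 ** 4                 # 2401 integrals without symmetry
unique = unique_eri_count(7)   # 406 unique integrals for the 7-function basis
```

This is why only 406 distinct two-electron integrals actually need to be stored for the water calculation, rather than 2,401.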
28.2: Hartree-Fock Calculation for Water
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/28%3A_The_Chemical_Bond_in_Polyatomic_Molecules/28.02%3A_Hartree-Fock_Calculation_for_Water
To find the Hartree-Fock (HF) molecular orbitals (MOs) we need to solve the following secular determinant:\[ \begin{vmatrix} F_{11}-ES_{11} & F_{12}-ES_{12} & \cdots & F_{17}-ES_{17}\\\ F_{21}-ES_{21} & F_{22}-ES_{22} & \cdots & F_{27}-ES_{27}\\\ \vdots & \vdots & \ddots & \vdots\\\ F_{71}-ES_{71} & F_{72}-ES_{72} & \cdots & F_{77}-ES_{77} \end{vmatrix}=0 \label{29.2.1} \]with \(S_{ij}\) being the overlap integrals of Equation 27.3.2, and \(F_{ij}\) the matrix elements of the Fock operator, defined using the one- and two-electron integrals in Equation 29.1.1 and Equation 29.1.2 as:\[ F_{ij} = K_{ij} + V_{ij} + \sum_{kl} P_{kl} \left[ V_{ijkl} -\frac{1}{2}V_{ikjl} \right], \label{29.2.2} \]with the density matrix elements \(P_{kl}\) defined as:\[ P_{kl} = 2 \sum_{i}^{\mathrm{occupied}} a_{ki}a_{li}, \label{29.2.3} \]where the \(a\) values are the coefficients of the basis functions in the occupied molecular orbitals. These values will be determined using the SCF procedure, which proceeds as follows: at the first step we simply guess what these are; then we iterate through solution of the secular determinant to derive new coefficients, and we continue to do so until self-consistency is reached (i.e., the \(N+1\) step provides coefficients and energies that are equal to those in the \(N\) step). We can now run the SCF procedure for water using a fixed geometry of the nuclei close to the experimental structure: O-H bond lengths of \(0.95\;\text{Å}\) and a valence bond angle at oxygen of \(104.5^\circ\). To do so, we can use a minimal basis composed of the following seven functions: basis function #1 is an oxygen \(1s\) orbital, #2 is an oxygen \(2s\) orbital, #3 is an oxygen \(2p_x\) orbital, #4 is an oxygen \(2p_y\) orbital, #5 is an oxygen \(2p_z\) orbital, #6 is one hydrogen \(1s\) orbital, and #7 is the other hydrogen \(1s\) orbital. The corresponding integrals introduced in the previous section can be calculated using a quantum chemistry code.
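To make the iterative procedure concrete, here is a minimal sketch of a restricted HF SCF loop in Python/NumPy. The function names are ours; the one-electron matrix, the overlap matrix, and a callable that builds the two-electron contribution from the current density matrix are assumed to be supplied by a quantum chemistry code:

```python
import numpy as np

def density_matrix(C, n_occ):
    # Closed-shell density matrix, Eq. 29.2.3: P_kl = 2 * sum_i^occ C_ki C_li
    C_occ = C[:, :n_occ]
    return 2.0 * C_occ @ C_occ.T

def solve_roothaan(F, S):
    # Solve FC = SCE via symmetric orthogonalization X = S^(-1/2)
    s, U = np.linalg.eigh(S)
    X = U @ np.diag(s ** -0.5) @ U.T
    eps, Cp = np.linalg.eigh(X @ F @ X)
    return eps, X @ Cp          # eigenvalues ascending, back-transformed MOs

def scf(h, S, g_of_p, n_occ, max_iter=50, tol=1e-9):
    # Minimal SCF loop: start from P = 0, rebuild F = h + G(P), iterate
    # until the density matrix stops changing (self-consistency).
    P = np.zeros_like(h)
    for _ in range(max_iter):
        eps, C = solve_roothaan(h + g_of_p(P), S)
        P_new = density_matrix(C, n_occ)
        converged = np.max(np.abs(P_new - P)) < tol
        P = P_new
        if converged:
            break
    return eps, C, P

# Toy check: a non-interacting "molecule" in an orthonormal basis,
# one doubly occupied level (illustrative numbers only)
eps, C, P = scf(np.diag([-2.0, -1.0, 1.0]), np.eye(3),
                lambda P: np.zeros_like(P), n_occ=1)
```

Starting from \(\mathbf{P}=0\) corresponds to the crudest possible guess (a core-Hamiltonian guess); production codes use more elaborate starting points, but the loop structure is the same.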
The calculated overlap matrix elements are:\[ \mathbf{S}=\begin{bmatrix} \mathrm{O}\;1s & \mathrm{O}\;2s & \mathrm{O}\;2p_x & \mathrm{O}\;2p_y & \mathrm{O}\;2p_z & \mathrm{H}_a\;1s & \mathrm{H}_b\;1s & \\\ 1.000 & & & & & & &\mathrm{O}\;1s \\\ 0.237 & 1.000 & & & & & &\mathrm{O}\;2s \\\ 0.000 & 0.000 & 1.000 & & & & &\mathrm{O}\;2p_x \\\ 0.000 & 0.000 & 0.000 & 1.000 & & & &\mathrm{O}\;2p_y \\\ 0.000 & 0.000 & 0.000 & 0.000 & 1.000 & & &\mathrm{O}\;2p_z \\\ 0.055 & 0.479 & 0.000 & 0.313 & -0.242 & 1.000 & &\mathrm{H}_a\;1s \\\ 0.055 & 0.479 & 0.000 & -0.313 & -0.242 & 0.256 & 1.000&\mathrm{H}_b\;1s \end{bmatrix} \label{29.2.4} \]There are many noteworthy features in \(\mathbf{S}\). First, it is shown in a lower packed triangular form because every element \(j,i\) is the same as the element \(i,j\) by symmetry, and every diagonal element is \(1\) because the basis functions are normalized. Note that, again by symmetry, every \(p\) orbital on oxygen is orthogonal (overlap = zero) with every \(s\) orbital and with each other, but the two \(s\) orbitals do overlap (this is due to the fact that they are not pure hydrogenic orbitals—which would indeed be orthogonal—but they have been optimized, so \(S_{12} = 0.237\)). Note also that the oxygen \(1s\) orbital overlaps about an order of magnitude less with any hydrogen \(1s\) orbital than does the oxygen \(2s\) orbital, reflecting how much more rapidly the first quantum-level orbital decays compared to the second. Note that by symmetry the oxygen \(p_x\) cannot overlap with the hydrogen \(1s\) functions (positive overlap below the plane exactly cancels negative overlap above the plane) and that the oxygen \(p_y\) overlaps with the two hydrogen \(1s\) orbitals equally in magnitude but with different sign because the \(p\) orbital has different phase at its different ends. Finally, the overlap of the \(p_z\) is identical with each H \(1s\) because it is not changing which lobe it uses to interact. 
The kinetic energy matrix (in a.u.) is:\[ \mathbf{K}= \begin{bmatrix} 29.003 & & & & & & \\\ -0.168 & 0.808 & & & & & \\\ 0.000 & 0.000 & 2.529 & & & & \\\ 0.000 & 0.000 & 0.000 & 2.529 & & & \\\ 0.000 & 0.000 & 0.000 & 0.000 & 2.529 & & \\\ -0.002 & 0.132 & 0.000 & 0.229 & -0.177 & 0.760 & \\\ -0.002 & 0.132 & 0.000 & -0.229 & -0.177 & 0.009 & 0.760 \end{bmatrix} \label{29.2.5} \]Notice that every diagonal term is much larger than any off-diagonal term. Recall that each kinetic energy integral, Equation 29.1.1, involves the Laplacian operator, \(\nabla^2\). The Laplacian reports back the sum of second derivatives in all coordinate directions. That is, it is a measure of how fast the slope of the function is changing in various directions. If we take two atomic orbitals \(\mu\) and \(\nu\) far apart from each other, then since Gaussians go to zero at least exponentially fast with distance, \(\nu\) is likely to be very flat where \(\mu\) is large. The second derivative of a flat function is zero. So, every point in the integration will be roughly the amplitude of \(\mu\) times zero, and not much will accumulate. For the diagonal element, on the other hand, the interesting second derivatives will occur where the function has maximum amplitude (amongst other places), so the accumulation should be much larger. Notice also that off-diagonal terms can be negative. That is because there is no real physical meaning to a kinetic energy expectation value involving two different orbitals. It is just an integral that appears in the complete secular determinant. Symmetry again keeps \(p\) orbitals from mixing with \(s\) orbitals or with each other.
The nuclear attraction matrix is:\[ \mathbf{V}= \begin{bmatrix} -61.733 & & & & & & \\\ -7.447 & -10.151 & & & & & \\\ 0.000 & 0.000 & -9.926 & & & & \\\ 0.000 & 0.000 & 0.000 & -10.152 & & & \\\ 0.000 & 0.000 & 0.000 & 0.000 & -10.088 & & \\\ -1.778 & -3.920 & 0.000 & -0.228 & -0.184 & -5.867 & \\\ -1.778 & -3.920 & 0.000 & 0.228 & 0.184 & -1.652 & -5.867 \end{bmatrix} \label{29.2.6} \]Again, diagonal elements are bigger than off-diagonal elements because the \(1/r\) operator acting on a basis function \(\nu\) will ensure that the largest contribution to the overall integral will come from the nucleus \(k\) on which basis function \(\nu\) resides. Unless \(\mu\) also has significant amplitude around that nucleus, it will multiply the result by roughly zero and the whole integral will be small. Again, positive values can arise when two different functions are involved, even though electrons in a single orbital must always be attracted to nuclei, and thus diagonal elements must always be negative. Note that the \(p\) orbitals all have different nuclear attractions. That is because, although they all have the same attraction to the O nucleus, they have different amplitudes at the H nuclei. The \(p_x\) orbital has the smallest amplitude at the H nuclei (zero, since they are in its nodal plane), so it has the smallest nuclear attraction integral. The \(p_z\) orbital has somewhat smaller amplitude at the H nuclei than the \(p_y\) orbital because the bond angle is greater than \(90^\circ\) (it is \(104.5^\circ\); if it were \(90^\circ\) the O-H bonds would bisect the \(p_y\) and \(p_z\) orbitals and their amplitudes at the H nuclei would necessarily be the same). Thus, the nuclear attraction integral is slightly smaller in magnitude for the \(p_z\) orbital than for the \(p_y\) orbital. The sum of the kinetic and nuclear attraction integrals is usually called the one-electron or core part of the Fock matrix and abbreviated \(\mathbf{h}\) (i.e., \(\mathbf{h} = \mathbf{K} + \mathbf{V}\)).
One then writes \(\mathbf{F} = \mathbf{h} + \mathbf{G}\) where \(\mathbf{F}\) is the Fock matrix, \(\mathbf{h}\) is the one-electron matrix, and \(\mathbf{G}\) is the remaining part of the Fock matrix coming from the two-electron four-index integrals (cf. Equation \ref{29.2.2}). To compute those two-electron contributions, however, we need the density matrix, which itself comes from the occupied MO coefficients. So, we need an initial guess at those coefficients. We can obtain such a guess in many ways; provided the procedure converges, the final result does not depend on which one we choose. With these coefficients we can compute the density matrix using Equation \ref{29.2.3}:\[ \mathbf{P}=\begin{bmatrix} 2.108 & & & & & & \\\ -0.456 & 2.010 & & & & & \\\ 0.000 & 0.000 & 2.000 & & & & \\\ 0.000 & 0.000 & 0.000 & 0.737 & & & \\\ -0.104 & 0.618 & 0.000 & 0.000 & 1.215 & & \\\ -0.022 & -0.059 & 0.000 & 0.539 & -0.482 & 0.606 & \\\ -0.022 & -0.059 & 0.000 & -0.539 & -0.482 & -0.183 & 0.606 \end{bmatrix} \label{29.2.7} \]With \(\mathbf{P}\), we can compute the remaining contribution of \(\mathbf{G}\) to the Fock matrix. We will not list all 406 two-electron integrals here. Instead, we will simply write the total Fock matrix:\[ \mathbf{F}= \begin{bmatrix} -20.236 & & & & & & \\\ -5.163 & -2.453 & & & & & \\\ 0.000 & 0.000 & -0.395 & & & & \\\ 0.000 & 0.000 & 0.000 & -0.327 & & & \\\ 0.029 & 0.130 & 0.000 & 0.000 & -0.353 & & \\\ -1.216 & -1.037 & 0.000 & -0.398 & 0.372 & -0.588 & \\\ -1.216 & -1.037 & 0.000 & 0.398 & 0.372 & -0.403 & -0.588 \end{bmatrix} \label{29.2.8} \]So, we’re finally ready to solve the secular determinant, since we have \(\mathbf{F}\) and \(\mathbf{S}\) fully formed. When we do that, and then solve for the MO coefficients for each root \(E\), we get new occupied MOs.
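In code, assembling \(\mathbf{G}\) from Equation \ref{29.2.2} amounts to two tensor contractions of the integral array against the density matrix. A sketch (assuming the full array `eri[i,j,k,l]` storing \(V_{ijkl}\) fits in memory, which is trivially true at this basis size):

```python
import numpy as np

def fock_matrix(h, eri, P):
    # F_ij = h_ij + sum_kl P_kl [ V_ijkl - (1/2) V_ikjl ]   (Eq. 29.2.2)
    J = np.einsum('ijkl,kl->ij', eri, P)   # Coulomb contraction
    K = np.einsum('ikjl,kl->ij', eri, P)   # exchange contraction
    return h + J - 0.5 * K

# Tiny synthetic check: h = 0, all integrals equal to 1, P = identity
F_demo = fock_matrix(np.zeros((2, 2)), np.ones((2, 2, 2, 2)), np.eye(2))
```

Production codes avoid materializing the full four-index array for large bases, but the contraction pattern is the same.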
Then, we iterate again, and again, and again, until we are satisfied that further iterations will not change either our (i) energy, (ii) density matrix, or (iii) MO coefficients (it’s up to the quantum chemist to decide what is considered satisfactory). In our water calculation, if we monitor the energy at each step we find:\[ \begin{equation} \begin{aligned} E(RHF) &= \; -74.893\,002\,803\qquad\text{a.u. after 1 cycles} \\ E(RHF) &= \; -74.961\,289\,145\qquad\text{a.u. after 2 cycles} \\ E(RHF) &= \; -74.961\,707\,247\qquad\text{a.u. after 3 cycles} \\ E(RHF) &= \; -74.961\,751\,946\qquad\text{a.u. after 4 cycles} \\ E(RHF) &= \; -74.961\,753\,962\qquad\text{a.u. after 5 cycles} \\ E(RHF) &= \; -74.961\,754\,063\qquad\text{a.u. after 6 cycles} \\ E(RHF) &= \; -74.961\,754\,063\qquad\text{a.u. after 7 cycles} \\ \end{aligned} \end{equation}\label{29.2.9} \]This means that our original guess was really not too bad: off by a bit less than \(0.1\text{ a.u.}\), or roughly \(60\text{ kcal mol}^{-1}\). Our guess energy is too high, as the variational principle guarantees that it must be.
Our first iteration through the secular determinant picks up nearly \(0.07\text{ a.u.}\), our next iteration an additional \(0.000\,42\) or so, and by the end we are converged to within 1 nanohartree (\(0.000\,000\,6\text{ kcal mol}^{-1}\)). The final optimized MOs for water are:\[ \begin{equation} \begin{matrix} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\ E & -20.24094 & -1.27218 & -.62173 & -.45392 & -.39176 & .61293 & .75095 \\\ \\\ 1 & .99411 & -.23251 & .00000 & -.10356 & .00000 & -.13340 & .00000 \\\ 2 & .02672 & .83085 & .00000 & .53920 & .00000 & .89746 & .00000 \\\ 3 & .00000 & .00000 & .00000 & .00000 & 1.0000 & .00000 & .00000 \\\ 4 & .00000 & .00000 & .60677 & .00000 & .00000 & .00000 & .99474 \\\ 5 & -.00442 & -.13216 & .00000 & .77828 & .00000 & -.74288 & .00000 \\\ 6 & -.00605 & .15919 & .44453 & -.27494 & .00000 & -.80246 & -.84542 \\\ 7 & -.00605 & .15919 & -.44453 & -.27494 & .00000 & -.80246 & .84542 \\\ \end{matrix} \end{equation}\label{29.2.10} \]where the first row reports the eigenvalues of each MO, in \(E_h\) (i.e., the energy of one electron in the MO). Twice the sum of the occupied MO energies is not the total electronic energy: each orbital energy already contains the repulsion with all the other electrons, so summing over all occupied orbitals counts the electron-electron repulsion twice, and the double-counted repulsion must be subtracted. So, if we sum the occupied orbital energies (times two, since there are two electrons in each orbital), we get \(2(-20.24094{-}1.27218{-}0.62173{-}0.45392{-}0.39176)=-45.961\,060\). If we now subtract the electron-electron repulsion energy \(38.265\,406\) we get \(-84.226\,466\). If we add the nuclear repulsion energy \(9.264\,701\) to this we get a total energy \(-74.961\,765\). The difference between this and the converged result above (\(-74.961\,754\)) can be attributed to rounding in the MO energies, which are truncated after 5 places. Notice that the five occupied MOs all have negative energies. So, their electrons are bound within the molecule.
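The energy bookkeeping in the previous paragraph can be checked directly (all values in atomic units, taken from the text above):

```python
# Occupied MO eigenvalues from the converged calculation (hartree)
occ_energies = [-20.24094, -1.27218, -0.62173, -0.45392, -0.39176]

e_orbitals = 2 * sum(occ_energies)       # two electrons per occupied MO
e_electronic = e_orbitals - 38.265406    # remove double-counted e-e repulsion
e_total = e_electronic + 9.264701        # add back the nuclear repulsion
# e_total comes out to -74.961765, matching the text
```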
The unoccupied MOs (called “virtual” MOs) all have positive energies, meaning that the molecule will not spontaneously accept an electron from another source. This page titled 28.2: Hartree-Fock Calculation for Water is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
28.3: Shapes and Energies of Molecular Orbitals
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/28%3A_The_Chemical_Bond_in_Polyatomic_Molecules/28.03%3A_Shapes_and_Energies_of_Molecular_Orbitals
If we analyze the optimized coefficients of the occupied MOs reported in Equation 29.2.10, we observe that the lowest energy orbital (by a lot!) is a nearly pure oxygen \(1s\) orbital, since the coefficient of the oxygen \(1s\) basis function is very nearly 1 and all other coefficients are rather close to 0. Note, however, that the coefficient is not really a percentage measure. That’s because the basis functions are not necessarily orthogonal to one another. Let’s consider the next molecular orbital up, number 2. It has a dominant contribution from the oxygen \(2s\) basis function, but non-trivial contributions from many other basis functions as well. In order to understand which kind of orbital it is, it is useful to try to visualize some of its properties. For example, recall that the square of the orbital at a particular point in space represents a probability density. As such, we can map values of the square of each orbital on a grid in 3-dimensional space, and then pick a value of probability density, say \(0.04\; a_0^{-3}\), and plot that as a contour surface (remember that a probability density is a 4-dimensional quantity, so we need to take a slice at some constant density to be able to plot it in 3-D). That surface is called an “isodensity” surface. In addition to the square of the function, we can also color blue the regions where the wave function is positive and red the regions where it is negative. The five occupied and two unoccupied MOs mapped from their one-electron wave functions are plotted in figure \(\PageIndex{1}\). Going back to the Lewis structure of water as taught in general chemistry courses, it says that there is one pair of electrons in one O–H \(\sigma\) bond, one pair in another identical such \(\sigma\) bond, and two equivalent pairs that constitute the lone pairs on oxygen.
The two lone pairs and the O–H bonds should be pointing towards the apices of a tetrahedron because they are all considered to be \(sp^3\) hybridized. As you can see, the MOs look nothing like the Lewis picture. Instead, amongst other details, there is one lone pair that is pure \(p\) (not \(sp^3\)), another that is, if anything, \(sp^2\)-like, but also enjoys contribution from hydrogen \(1s\) components. There is one orbital that looks like both O–H \(\sigma\) bonds are present, but another that has an odd “bonding-all-over” character to it. Is it really possible that for something as simple as water all the things you’ve ever been told about the Lewis structure are wrong? Water must have two equivalent lone pairs, right? It turns out that the molecular orbital results can be tested with spectroscopic experiments, and suffice it to say, they agree perfectly. But the \(sp^3\)-hybridized picture of water works well, for example, to explain its hydrogen-bonding behavior: in liquid water each water molecule makes two hydrogen bonds to other water molecules and accepts two more from different water molecules, and the final structure has a net lattice-like form that is tetrahedral at each oxygen atom. How can the above MOs explain that? The key point to remember is that another molecule does not see the individual orbitals of water, it just sees the final effect of all of those electrons and nuclei together. To explain the tetrahedral H-bond lattice we can plot some constant level of electron density (e.g., \(0.02\;a_0^{-3}\)) and map onto this isodensity surface the values of the electrostatic potential. We can find these values by bringing a positive test charge onto that surface and recording how much it would find itself attracted (because of a net negative electrostatic potential) or repelled (because of a net positive electrostatic potential). This is done in figure \(\PageIndex{2}\).
Notice how the negative potential is entirely on the oxygen side and the positive potential entirely on the hydrogen side. Moreover, the negative potential splays out to the tetrahedral points and the positive potential does too (those points for the purple region being roughly where the H atoms are). This page titled 28.3: Shapes and Energies of Molecular Orbitals is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
29.1: Rotational Spectroscopy
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/29%3A_Spectroscopy/29.01%3A_Rotational_Spectroscopy
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. Rotational transitions of molecules are usually measured in the range \(1-10\; \text{cm}^{-1}\) (microwave radiation), and rotational spectroscopy is therefore usually referred to as microwave spectroscopy. Rotational spectroscopy is actively used by astrophysicists to explore the chemical composition of the interstellar medium using radio telescopes. The rotational energies are derived theoretically by considering the molecules to be rigid rotors and applying the same treatment that we saw in chapter 20. Correction terms might be applied to account for deviations from the ideal rigid rotor case. As we saw in chapter 20, the quantized rotational energy levels of a rigid rotor depend on the moment of inertia, which in turn depends on the masses of the nuclei and the internuclear distance. Reversing the theoretical procedure of obtaining the energy levels from the distances, we can use the experimental energy levels to derive very precise values of molecular bond lengths (and in some complex cases, also of angles). We will discuss below the simplest case of a diatomic molecule. For non-linear molecules there are multiple moments of inertia, and only a few analytical methods of solving the TISEq are available. For the most complicated cases, numerical methods can be used. Transitions between rotational states can be observed in molecules with a permanent electric dipole moment. The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the center of mass.
The two degrees of rotational freedom correspond to the spherical coordinates, \(\theta\) and \(\varphi\), which describe the direction of the molecular axis. The quantum state is determined by two quantum numbers \(J\) and \(M\). \(J\) defines the magnitude of the rotational angular momentum, and \(M\) its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on \(J\). Under the rigid rotor model, the rotational energy levels, \(F(J)\), of the molecule can be expressed as:\[ F\left(J\right)=BJ\left(J+1\right)\qquad J=0,1,2,\ldots \label{30.1.1} \]where \(B\) is the rotational constant of the molecule and is related to its moment of inertia. In a diatomic molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, so:\[ B=\dfrac{h}{8\pi ^{2}cI}, \label{30.1.2} \]with:\[ I=\dfrac{m_1m_2}{m_1 +m_2}d^2, \label{30.1.3} \]where \(m_1\) and \(m_2\) are the masses of the atoms and \(d\) is the distance between them. The selection rule for rotational spectroscopy dictates that during emission or absorption the rotational quantum number has to change by unity:\[ \Delta J = J^{\prime } - J^{\prime \prime } = \pm 1, \label{30.1.4} \]where \(J^{\prime \prime }\) denotes the lower level and \(J^{\prime }\) denotes the upper level involved in the transition. Thus, the locations of the lines in a rotational spectrum will be given by\[ {\tilde \nu }_{J^{\prime }\leftrightarrow J^{\prime \prime }}=F\left(J^{\prime }\right)-F\left(J^{\prime \prime }\right)=2B\left(J^{\prime \prime }+1\right)\qquad J^{\prime \prime }=0,1,2,\ldots \label{30.1.5} \]Rotational transitions that obey this selection rule are illustrated in the diagram in figure \(\PageIndex{1}\).\(^1\) The dashed lines show how these transitions map onto features that can be observed experimentally.
Adjacent \(J^{\prime} \leftarrow J^{\prime \prime}\) transitions are separated by \(2B\) in the observed spectrum. Frequency or wavenumber units can also be used for the \(x\) axis of this plot. The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number \(J\), relative to the number of molecules in the ground state, \(N_J/N_0\), is given by the Boltzmann distribution:\[ \dfrac{N_J}{N_0}=e^{-\dfrac{E_J}{kT}} =\exp\left[-\dfrac {BhcJ(J+1)}{kT}\right], \label{30.1.6} \]where \(k\) is the Boltzmann constant and \(T\) is the absolute temperature. This factor decreases as \(J\) increases. The second factor is the degeneracy of the rotational state, which is equal to \(2J+1\). This factor increases as \(J\) increases. Combining the two factors we obtain:\[ \mathrm{population} \propto (2J+1)\exp\left[-\dfrac{E_J}{kT}\right], \label{30.1.7} \]in agreement with the experimental shape of rotational spectra of diatomic molecules. This page titled 29.1: Rotational Spectroscopy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
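The competition between the decaying Boltzmann factor and the growing \(2J+1\) degeneracy in Equation \ref{30.1.7} is easy to explore numerically. In the sketch below the function name is ours, and the rotational constant is roughly that of CO, used purely for illustration:

```python
import math

def rel_population(J, B, T):
    # Relative population (2J+1) * exp(-B h c J(J+1) / kT); B in cm^-1
    h = 6.62607015e-34   # Planck constant, J s
    c = 2.99792458e10    # speed of light in cm/s, so h*c*B is in joules
    k = 1.380649e-23     # Boltzmann constant, J/K
    return (2 * J + 1) * math.exp(-B * h * c * J * (J + 1) / (k * T))

# For B ~ 1.93 cm^-1 (roughly CO) at 300 K, the population passes
# through a maximum at intermediate J, as seen in measured spectra
pops = [rel_population(J, 1.93, 300.0) for J in range(20)]
```

The resulting rise-then-fall of `pops` reproduces the characteristic intensity envelope of a rotational spectrum.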
29.2: Vibrational Spectroscopy
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/29%3A_Spectroscopy/29.02%3A_Vibrational_Spectroscopy
Vibrational spectroscopy is concerned with the measurement of the energies of transitions between quantized vibrational states of molecules in the gas phase. These transitions usually occur in the middle infrared (IR) region of the electromagnetic spectrum, at approximately \(4,000-400\;\text{cm}^{-1}\) (\(2.5-25\;\mu \text{m}\)). In the gas phase, vibrational transitions are almost always accompanied by changes in rotational energy. Transitions involving changes in both vibrational and rotational states are usually abbreviated as rovibrational transitions. Since changes in rotational energy levels are typically much smaller than changes in vibrational energy levels, changes in rotational state are said to give fine structure to the vibrational spectrum. For a given vibrational transition, the same theoretical treatment that we saw in the previous section for pure rotational spectroscopy gives the rotational quantum numbers, energy levels, and selection rules. As we have done in the previous section, we will discuss below the simplest case of a diatomic molecule. For non-linear molecules the spectra become complicated to calculate, but their interpretation remains an important tool for the analysis of chemical structures. Diatomic molecules with the general formula \(\mathrm{AB}\) have one normal mode of vibration involving stretching of the \(\mathrm{A}-\mathrm{B}\) bond. The vibrational term values, \(G(v)\), can be calculated with the harmonic approximation that we discussed in chapter 20. The resulting equidistant energy levels depend on one vibrational quantum number \(v\):\[ G(v) = \omega_e \left( v + \dfrac{1}{2} \right), \label{30.2.1} \]where \(\omega_e\) is the harmonic frequency around equilibrium. When the molecule is in the gas phase, it can rotate about an axis, perpendicular to the molecular axis, passing through the center of mass of the molecule.
As we discussed in the previous section, the rotational energy is also quantized, and depends on the rotational quantum number \(J\). The values of the ro-vibrational states are found (in wavenumbers) by combining the expressions for vibration and rotation: \[ G(v)+F_{v}(J)=\left[\omega_e \left(v + \dfrac{1}{2} \right) +B_{v}J(J+1)\right], \label{30.2.2} \]where \(F_{v}(J)\) are the rotational levels at each vibrational state \(v\).\(^1\) The selection rule for electric dipole allowed ro-vibrational transitions, in the case of a diamagnetic diatomic molecule, is:\[ \Delta v=\pm 1\ (\pm 2,\pm 3,\ldots),\; \Delta J=\pm 1. \label{30.2.3} \]The transition with \(\Delta v =\pm 1\) is known as the fundamental transition, while those with larger changes in \(v\) are called overtones. The \(\Delta J\) selection rule has two consequences: each vibrational transition appears as two branches of rotational lines (one with \(\Delta J=-1\) and one with \(\Delta J=+1\)), and the transition with \(\Delta J=0\) is absent. A typical rovibrational spectrum is reported in figure \(\PageIndex{1}\) for the \(\mathrm{CO}\) molecule.\(^2\) The intensity of the signals is, once again, proportional to the initial population of the levels. Notice how the signals in the spectrum are divided among two sides, the P-branch to the left, and the R-branch to the right. These signals correspond to the transitions reported in figure \(\PageIndex{2}\).\(^3\) Notice how the transitions corresponding to the Q-branch are forbidden by the selection rules, and therefore not observed in the experimental spectrum. The position of the missing Q-branch, however, can be easily obtained from the experimental spectrum as the missing signal between the P- and R- branches. Since the Q-branch transitions do not involve changes in the rotational energy level, its position corresponds directly to \(\omega_e\). The spacing of the rotational lines, in turn, determines \(B\), which makes rovibrational spectroscopy an important experimental tool in the determination of bond distances of diatomic molecules. The quantum mechanics for homonuclear diatomic molecules is qualitatively the same as for heteronuclear diatomic molecules, but the selection rules governing transitions are different.
Since the electric dipole moment of the homonuclear diatomics is zero, the fundamental vibrational transition is electric-dipole-forbidden and the molecules are infrared inactive. The spectra of these molecules can, however, be observed with a light-scattering technique that is subject to different selection rules. This technique is called Raman spectroscopy, and it allows identification of the rovibrational spectra of homonuclear diatomic molecules because their molecular vibration is Raman-allowed. This page titled 29.2: Vibrational Spectroscopy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
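If the rotational constant is approximated as the same in both vibrational states, the P- and R-branch line positions described above take the simple forms \(\bar{\nu}_R=\omega_e+2B(J^{\prime\prime}+1)\) and \(\bar{\nu}_P=\omega_e-2BJ^{\prime\prime}\). A sketch (the function name is ours; the \(\omega_e\) and \(B\) values are rough CO-like numbers used only for illustration):

```python
def rovib_lines(omega, B, J_max):
    # v=0 -> v=1 line positions (cm^-1) under the rigid-rotor/harmonic
    # approximation with a single rotational constant B:
    # R-branch (J'' -> J''+1): omega + 2B(J''+1)
    # P-branch (J'' -> J''-1): omega - 2B*J''
    R = [omega + 2 * B * (J + 1) for J in range(J_max + 1)]
    P = [omega - 2 * B * J for J in range(1, J_max + 1)]
    return P, R

P, R = rovib_lines(2143.0, 1.93, 10)
# The missing Q-branch would sit at the band origin, 2143 cm^-1
```

Within each branch adjacent lines are separated by \(2B\), while the gap between the first P and first R lines, straddling the missing Q-branch, is \(4B\).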
29.3: Electronic Spectroscopy
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/29%3A_Spectroscopy/29.03%3A_Electronic_Spectroscopy
Electronic spectroscopy is concerned with the measurement of the energies of transitions between quantized electronic states of molecules. Electronic transitions are always associated with simultaneous changes in vibrational levels. In the gas phase vibronic transitions are also accompanied by changes in rotational energy. Electronic transitions are typically observed in the visible and ultraviolet regions, in the wavelength range approximately \(200-700\; \text{nm}\) (\(50,000-14,000\; \text{cm}^{-1}\)). When the electronic and vibrational energy changes are drastically different, vibronic coupling (mixing of electronic and vibrational wave functions) can be neglected and the energy of a vibronic level can be taken as the sum of the electronic and vibrational (and rotational) energies; that is, the Born–Oppenheimer approximation applies. The overall molecular energy depends not only on the electronic state but also on the vibrational and rotational quantum numbers, \(v\) and \(J\). In this context, it is conventional to add a double prime \(\left(v^{\prime\prime},J^{\prime\prime}\right)\) for levels of the electronic ground state and a single prime \(\left(v^{\prime},J^{\prime}\right)\) for electronically excited states. Each electronic transition may show vibrational coarse structure, and for molecules in the gas phase, rotational fine structure. This is true even when the molecule has a zero dipole moment and therefore has no vibration-rotation infrared spectrum or pure rotational microwave spectrum. It is necessary to distinguish between absorption and emission spectra. With absorption the molecule starts in the ground electronic state, and usually also in the vibrational ground state \(v^{\prime\prime}=0\), because at ordinary temperatures the energy necessary for vibrational excitation is large compared to the average thermal energy. The molecule is excited to another electronic state and to many possible vibrational states \(v^{\prime}=0,1,2,3,\ldots\).
With emission, the molecule can start in various populated vibrational states, and finish in the electronic ground state in one of many vibrational levels. The emission spectrum is more complicated than the absorption spectrum of the same molecule because more combinations of vibrational levels are involved. As we did for the previous two cases, we will concentrate below on the electronic absorption spectroscopy of diatomic molecules. The vibronic spectra of diatomic molecules in the gas phase also show rotational fine structure. Each line in a vibrational progression will show P- and R- branches. For some electronic transitions there will also be a Q-branch. The transition energies of the lines for a particular vibronic transition are given (in wavenumbers) by:\[ G(J^{\prime },J^{\prime \prime })={\bar \nu }_{v^{\prime }-v^{\prime \prime }}+B^{\prime }J^{\prime }(J^{\prime }+1)-B^{\prime \prime }J^{\prime \prime }(J^{\prime \prime }+1). \label{30.3.1} \]The values of the rotational constants, \(B^{\prime}\) and \(B^{\prime\prime}\), may differ appreciably because the bond length in the electronic excited state may be quite different from the bond length in the ground state. The rotational constant is inversely proportional to the square of the bond length.
Usually \(B^{\prime}<B^{\prime\prime}\). For the P-branch, \(J^{\prime}=J^{\prime\prime}-1\), and:\[ \begin{aligned} \bar{\nu}_P &=\bar{\nu}_{v^{\prime}-v^{\prime \prime}}+B^{\prime}\left(J^{\prime \prime}-1\right) J^{\prime \prime}-B^{\prime \prime} J^{\prime \prime}\left(J^{\prime \prime}+1\right) \\ &=\bar{\nu}_{v^{\prime}-v^{\prime \prime}}-\left(B^{\prime}+B^{\prime \prime}\right) J^{\prime \prime}+\left(B^{\prime}-B^{\prime \prime}\right) {J^{\prime \prime}}^{2}. \end{aligned} \label{30.3.2} \]Similarly, for the R-branch \(J^{\prime\prime }=J^{\prime }-1\), and:\[ \begin{aligned} {\bar \nu }_{R} &={\bar \nu}_{v^{\prime}-v^{\prime\prime}}+B^{\prime}J^{\prime}(J^{\prime}+1)-B^{\prime\prime}J^{\prime}(J^{\prime}-1) \\ &={\bar \nu }_{v^{\prime}-v^{\prime\prime}}+(B^{\prime}+B^{\prime\prime})J^{\prime}+(B^{\prime}-B^{\prime\prime}){J^{\prime}}^{2}. \end{aligned} \label{30.3.3} \]Thus, the wavenumbers of transitions in both P- and R-branches are given, to a first approximation, by the single formula:\[ {\bar \nu }_{P,R}={\bar \nu }_{v^{\prime },v^{\prime \prime }}+(B^{\prime }+B^{\prime \prime })m+(B^{\prime }-B^{\prime \prime })m^{2},\quad m=\pm 1,\pm 2\, \ldots. \label{30.3.4} \]Here positive \(m\) values refer to the R-branch (with \(m=+J^{\prime}=J^{\prime\prime}+1\)) and negative values refer to the P-branch (with \(m=-J^{\prime\prime}\)).The intensity of allowed vibronic transitions is governed by the Franck-Condon principle, which states that during an electronic transition, a change from one vibrational energy level to another will be more likely to happen if the two vibrational wave functions overlap more significantly. A diagrammatic representation of electronic spectroscopy and the Franck-Condon principle for a diatomic molecule is presented in figure \(\PageIndex{1}\).\(^1\)This page titled 29.3: Electronic Spectroscopy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati.
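The single branch formula lends itself to a quick numerical check. The sketch below uses a hypothetical band origin and rotational constants (not data for any real molecule) and verifies that the \(m\)-parametrization reproduces the explicit P-branch expression:

```python
# Rovibronic line positions parametrized by the running index m:
# nu(m) = nu0 + (B' + B'')*m + (B' - B'')*m^2
# m > 0 labels the R-branch, m < 0 the P-branch.

def line_position(m, nu0, B_upper, B_lower):
    """Wavenumber (cm^-1) of the rotational line with running index m."""
    return nu0 + (B_upper + B_lower) * m + (B_upper - B_lower) * m**2

def P_branch(J2, nu0, B_upper, B_lower):
    """Explicit P-branch formula with J' = J'' - 1 (J2 stands for J'')."""
    return nu0 + B_upper * (J2 - 1) * J2 - B_lower * J2 * (J2 + 1)

nu0 = 20000.0   # band origin, cm^-1 (hypothetical)
B_u = 1.60      # B' of the excited electronic state, cm^-1 (hypothetical)
B_l = 1.90      # B'' of the ground electronic state, cm^-1 (hypothetical)

# The m = -J'' lines must coincide with the explicit P-branch expression:
for J2 in range(1, 6):
    assert abs(line_position(-J2, nu0, B_u, B_l) - P_branch(J2, nu0, B_u, B_l)) < 1e-9
```

Because \(B^{\prime}<B^{\prime\prime}\) in this example, the quadratic term eventually turns the R-branch around, producing the band head characteristic of many electronic bands.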
4.1: Reaction Enthalpies
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/04%3A_Thermochemistry/4.01%3A_Reaction_Enthalpies
In chapter 3, we have discussed thermodynamic changes in energy in the absence of chemical reactions. When a chemical reaction takes place, some bonds break and/or some new ones form. This process either absorbs or releases the energy contained in these bonds. For a proper thermodynamic treatment of the system, this extra energy must be included in the net balance.In this chapter, we will consider the heat associated with chemical reactions. Since most chemical reactions happen at constant atmospheric pressure (isobaric conditions) in the lab, we can use eq. 3.1.13 to replace the inexact differential of the heat with the exact differential of the state function called enthalpy. The advantage of this transformation is that it allows us to study the heat associated with chemical reactions at constant pressure independently of their path. If we call the molecules at the beginning of the reaction “reactants” and the molecules at the end of the reaction “products,” the heat associated with the reaction (rxn) is defined as:\[ \Delta_{\text{rxn}} H = H_{\text{products}}-H_{\text{reactants}} \; . \nonumber \]For example, if we take a simple reaction of the form:\[ \mathrm{A} + \mathrm{B} \rightarrow \mathrm{C} + \mathrm{D}, \nonumber \]the heat at constant pressure is equal to the enthalpy of reaction, which is calculated as:\[ Q_P = \Delta_{\text{rxn}} H = \underbrace{ \left (H_{\mathrm{C}}+H_{\mathrm{D}} \right) }_{\text{products}} - \underbrace{\left( H_{\mathrm{A}}+H_{\mathrm{B}}\right)}_{\text{reactants}}. \label{4.1.2} \]Using the system-centric chemistry sign convention, reactions are classified in terms of the sign of their reaction enthalpies as follows:\(\Delta_{\text{rxn}} H > 0 \Rightarrow\) Endothermic reaction (heat is gained by the system). 
\(\Delta_{\text{rxn}} H < 0 \Rightarrow\) Exothermic reaction (heat is lost by the system).If we expand the sample reaction to account for its stoichiometry:\[ a\mathrm{A} + b\mathrm{B} \rightarrow c\mathrm{C} + d\mathrm{D}\; , \nonumber \]where \(a,b,c,d\) are the stoichiometric coefficients of species \(\mathrm{A,B,C,D}\), Equation \ref{4.1.2} can be rewritten as:\[ Q_P = \Delta_{\text{rxn}} H = \underbrace{\left( cH_{\mathrm{C}}+dH_{\mathrm{D}} \right) }_{\text{products}} - \underbrace{ \left( aH_{\mathrm{A}}+bH_{\mathrm{B}} \right)}_{\text{reactants}}, \label{4.1.3} \]while for the most general case we can write:\[ \Delta_{\text{rxn}} H = \sum_i \nu_i H_i, \nonumber \]where \(\nu_i\) is the stoichiometric coefficient of species \(i\) with its own sign. The signs of the stoichiometric coefficients are defined according to Equation \ref{4.1.3} as:\(\nu_i\) is positive if \(i\) is a product. \(\nu_i\) is negative if \(i\) is a reactant.This page titled 4.1: Reaction Enthalpies is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
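The signed-coefficient bookkeeping of the general formula can be sketched in a few lines of code. The enthalpy values below are arbitrary placeholders, since absolute enthalpies are not actually available:

```python
# Delta_rxn H = sum_i nu_i * H_i, with nu_i > 0 for products and nu_i < 0 for reactants.

def reaction_enthalpy(nu, H):
    """Sum the enthalpies of all species weighted by their signed stoichiometric coefficients."""
    return sum(nu[s] * H[s] for s in nu)

# Hypothetical reaction A + 2 B -> C + D with made-up enthalpies (kJ/mol):
nu = {"A": -1, "B": -2, "C": +1, "D": +1}
H  = {"A": -100.0, "B": -50.0, "C": -250.0, "D": -30.0}

print(reaction_enthalpy(nu, H))  # -80.0 -> negative, i.e. exothermic
```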
4.2: Standard Enthalpies of Formation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/04%3A_Thermochemistry/4.02%3A_Standard_Enthalpies_of_Formation
In principle, we could use eq. 4.1.3 to calculate the reaction enthalpy associated with any reaction. However, to do so, the absolute enthalpies \(H_i\) of reactants and products would be required. Unfortunately, absolute enthalpies are not known—and theoretically unknowable, since this would require an absolute zero for the enthalpy scale, which does not exist.\(^{1}\) To circumvent this problem, enthalpies relative to a defined reference state must be used. This reference state is defined as the constituent elements in their standard states, and the enthalpy changes for the formation of 1 mol of a substance from this reference state are called standard enthalpies of formation.The standard enthalpy of formation of compound \(i\), \(\Delta_{\mathrm{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_i \), is the change of enthalpy during the formation of 1 mol of \(i\) from its constituent elements, with all substances in their standard states.The standard pressure is defined as \(P^{\; -\kern-7pt{\ominus}\kern-7pt-} = 100 \; \mathrm{kPa} = 1 \; \mathrm{bar}\).\(^2\) There is no standard temperature, but standard enthalpies of formation are usually reported at room temperature, \(T = 298.15 \; \mathrm{K}\). 
Standard states are indicated with the symbol \(\; -\kern-7pt{\ominus}\kern-7pt-\) and they are defined for elements as the form in which such element is most stable at standard pressure (for example, for hydrogen, carbon, and oxygen the standard states are \(\mathrm{H}_{2(g)}, \mathrm{C}_{(s,\text{graphite})}, \text{and }\mathrm{O}_{2(g)}\), respectively).\(^3\)For example, the standard enthalpies of formation of some common compounds at \(T = 298.15 \; \mathrm{K}\) are calculated from the following reactions:\[\begin{equation} \begin{aligned} \mathrm{C}_{(s,\text{graphite})}+\mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{CO}_{2(g)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -394 \; \text{kJ/mol} \\ \mathrm{C}_{(s,\text{graphite})}+2 \mathrm{H}_{2(g)} \rightarrow \mathrm{CH}_{4(g)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{CH}_{4(g)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -75 \; \text{kJ/mol} \\ \mathrm{H}_{2(g)}+\frac{1}{2} \mathrm{O}_{2(g)} \rightarrow \mathrm{H}_2 \mathrm{O}_{(l)} \qquad & \Delta_{\mathrm{f}} H_{\mathrm{H}_2 \mathrm{O}_{(l)}}^{\; -\kern-7pt{\ominus}\kern-7pt-}= -286 \; \text{kJ/mol} \end{aligned} \end{equation} \nonumber \]A comprehensive list of standard enthalpies of formation of inorganic and organic compounds is also reported in appendix 16.This page titled 4.2: Standard Enthalpies of Formation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
4.3: Hess's Law
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/04%3A_Thermochemistry/4.03%3A_Hess's_Law
The calculation of a standard reaction enthalpy can be performed using the following cycle:\[\begin{equation} \begin{aligned} \text{reactants} & \quad \xrightarrow{\Delta_{\text{rxn}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \text{products} \\ \scriptstyle{-\Delta_{\text{f}} H_{\text{reactants}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \bigg\downarrow \quad & \qquad \qquad \qquad \qquad \scriptstyle{\bigg\uparrow \; \Delta_{\text{f}} H_{\text{products}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} \\ \text{"elements in } & \text{their standard reference state"} \end{aligned} \end{equation} \label{4.3.1} \]This process is summarized by the simple formula:\[ \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \Delta_{\mathrm{f}} H_{\text{products}}^{\; -\kern-7pt{\ominus}\kern-7pt-}- \Delta_{\mathrm{f}} H_{\text{reactants}}^{\; -\kern-7pt{\ominus}\kern-7pt-}. \label{4.3.2} \]Notice how there is a negative sign in front of the enthalpy of formation of the reactants because they are normally defined for the reactions that go from the elements to the reactants and not vice-versa. To close the cycle in Equation \ref{4.3.1}, however, we should go from the reactants to the elements, and therefore we must invert the sign in front of the formation enthalpies of the reactants. Equation \ref{4.3.2} can be generalized using the same technique used to derive eq. 4.1.4, resulting in:\[ \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \sum_i \nu_i \Delta_{\mathrm{f}} H_i^{\; -\kern-7pt{\ominus}\kern-7pt-}, \label{4.3.3} \]which is a mathematical expression of the law that is known as Hess’s Law. Hess’s law is valid at constant pressure because, at those conditions, the heat of reaction—a path function—is equal to the enthalpy of reaction—a state function. 
Therefore, the enthalpy of a reaction depends exclusively on the initial and final state, and it can be obtained via the pathway that passes through the elements in their standard state (the formation pathway).Calculate the standard enthalpy of reaction at 298 K for the combustion of 1 mol of methane, using the data in eq. 4.2.1.The reaction under consideration is: \[ \mathrm{CH}_{4(g)} + 2 \mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)} + 2 \mathrm{H}_2 \mathrm{O}_{(l)} \qquad \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= ? \nonumber \]Using Hess’s Law, Equation \ref{4.3.3}, the enthalpy of combustion of methane is:\[ \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{CO}_{2(g)}} + 2 \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(l)}} - \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{CH}_{4(g)}} - 2 \underbrace{\Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{O}_{2(g)}}}_{=0} \nonumber \]whose values are reported in eq. 4.2.1. Notice that the formation enthalpy of \(O_{2(g)}\) is zero, since it is an element in its standard state. The final result is:\[ \Delta_{\text{rxn}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}= \overbrace{-394}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{CO}_{2(g)}}} +2 \overbrace{(-286)}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{H}_{2}O_{(l)}}} - \overbrace{(-75)}^{\Delta_{\text{f}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}_{\mathrm{CH}_{4(g)}}} = -891 \; \mathrm{kJ/mol}, \nonumber \]where the negative sign indicates that the reaction is exothermic (see eq. 4.1.1), as we should expect. 
The cycle that we used to solve this exercise can be summarized with:\[ \begin{aligned} \mathrm{CH}_{4(g)} + & 2 \mathrm{O}_{2(g)} \quad \xrightarrow{\Delta_{\text{rxn}} H^{\; -\kern-5pt{\ominus}\kern-5pt-}} \quad \mathrm{CO}_{2(g)} + 2 \mathrm{H}_2 \mathrm{O}_{(l)} \\ \scriptstyle{-\Delta_{\text{f}} H_{\mathrm{CH}_{4(g)},\mathrm{O}_{2(g)}}^{\; -\kern-5pt{\ominus}\kern-5pt-}} & \searrow \qquad \qquad \qquad \qquad \qquad \nearrow \; \scriptstyle{\Delta_{\text{f}} H_{\text{CO}_{2(g)},\mathrm{H}_{2}\mathrm{O}_{(l)}}^{\; -\kern-5pt{\ominus}\kern-5pt-}}\\ & \qquad \mathrm{H}_{2(g)}, \mathrm{C}_{(s,\text{graphite})}, \mathrm{O}_{2(g)} \end{aligned} \nonumber \]Notice that at standard pressure and \(T = 298 \; \mathrm{K}\) water is in liquid form. However, when we burn methane, the heat associated with the exothermic reaction immediately vaporizes the water. Substances in different states of matter have different formation enthalpies, and for water vapor \(\Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(g)}} = -242 \ \mathrm{kJ/mol}\). The difference between the formation enthalpies of the same substance in different states represents the latent heat that separates them. For example, for water:\[ \begin{aligned} \Delta_{\text{vap}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_2O} & = \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(g)}} - \Delta_{\text{f}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_{2}O_{(l)}} \\ & = (-242) - (-286) = + 44 \; \text{kJ/mol} \end{aligned} \nonumber \]which is the latent heat of vaporization for water, \(\Delta_{\text{vap}} H^{\; -\kern-7pt{\ominus}\kern-7pt-}_{\mathrm{H}_2O}\). 
The latent heat is positive to indicate that the system absorbs energy in going from the liquid to the gaseous state (and it will release energy when going the opposite direction from gas to liquid).This page titled 4.3: Hess's Law is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
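The worked example maps directly onto the signed-coefficient form of Hess's law. This minimal sketch reuses the formation enthalpies quoted in the text:

```python
# Hess's law: Delta_rxn H = sum_i nu_i * Delta_f H_i for
# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l).
# Formation enthalpies (kJ/mol) are the values quoted in eq. 4.2.1.

dfH = {"CO2(g)": -394.0, "H2O(l)": -286.0, "CH4(g)": -75.0, "O2(g)": 0.0}
nu  = {"CO2(g)": +1, "H2O(l)": +2, "CH4(g)": -1, "O2(g)": -2}

drxnH = sum(nu[s] * dfH[s] for s in nu)
print(drxnH)  # -891.0 kJ/mol, matching the result in the text

# Latent heat of vaporization of water from the two formation enthalpies:
dvapH = (-242.0) - (-286.0)
print(dvapH)  # +44.0 kJ/mol
```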
4.4: Calculations of Enthalpies of Reaction at T ≠ 298 K
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/04%3A_Thermochemistry/4.04%3A_Calculations_of_Enthalpies_of_Reaction_at___T__298_K
Standard enthalpies of formation are usually reported at room temperature (\(T\) = 298 K), but enthalpies of formation at any temperature \(T'\) can be calculated from the values at 298 K using eqs. (2.4) and (3.13):\[ \begin{aligned} dH = C_P dT \rightarrow & \int_{H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}}^{H_{T'}} dH = \int_{T=298}^{T'} C_P dT \\ & H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}- H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}= \int_{T=298}^{T'} C_P dT \\ & H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}= H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}+ \int_{T=298}^{T'} C_P dT, \end{aligned} \tag{4.9} \label{4.9} \]which, in conjunction with Hess’s Law, results in:\[ \Delta_{\text{rxn}} H_{T'}^{-\kern-6pt{\ominus}\kern-6pt-}= \Delta_{\text{rxn}} H_{T=298}^{-\kern-6pt{\ominus}\kern-6pt-}+ \int_{T=298}^{T'} \Delta C_P dT, \tag{4.10} \label{4.10} \]with \(\Delta C_P = \sum_i \nu_i C_{P,i}\).Calculate \(\Delta_{\text{rxn}}H\) of the following reaction at 398 K, knowing that \(\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}\) at 298 K is -283.0 kJ/mol, and the following \(C_P\) values: \(\mathrm{CO}_{(g)}\) = 29 J/(mol K), \(\mathrm{O}_{2(g)}\) = 30 J/(mol K), \(\mathrm{CO}_{2(g)}\) = 38 J/(mol K):\[ \mathrm{CO}_{(g)}+\frac{1}{2}\mathrm{O}_{2(g)} \rightarrow \mathrm{CO}_{2(g)}, \nonumber \]Using Equation \ref{4.10} we obtain:\[ \Delta_{\text{rxn}} H^{398} = \overbrace{-283.0}^{\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}} + \int_{298}^{398} ( \overbrace{38}^{C_P^{\mathrm{CO}_2}} -\overbrace{29}^{C_P^{\mathrm{CO}}} -\frac{1}{2}\overbrace{30}^{C_P^{\mathrm{O}_2}} ) \times 10^{-3} dT, \nonumber \]which, assuming that the heat capacities do not depend on the temperature, becomes:\[ \begin{aligned} \Delta_{\text{rxn}} H^{398} &= -283.0 + \left(38-29-\frac{1}{2}30 \right) \times 10^{-3} \times (398-298) \\ &= -283.6 \; \text{kJ/mol}. 
\end{aligned} \nonumber \]As we notice from this result, a difference in temperature of 100 K translates into a change in \(\Delta_{\text{rxn}}H^{-\kern-6pt{\ominus}\kern-6pt-}\) of this reaction of only 0.6 kJ/mol. This is a trend that is often observed, and values of \(\Delta_{\text{rxn}}H\) are very weakly dependent on changes in temperature for most chemical reactions. This numerical result can also be compared with the amount that is experimentally measured for \(\Delta_{\text{rxn}}H^{398}\) of this reaction, which is –283.67 kJ/mol. This comparison strongly supports the assumption that we used to solve the integral in Equation \ref{4.10}, confirming that the heat capacities are mostly independent of temperature.This page titled 4.4: Calculations of Enthalpies of Reaction at T ≠ 298 K is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
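With temperature-independent heat capacities, eq. 4.10 reduces to \(\Delta_{\text{rxn}} H(T') = \Delta_{\text{rxn}} H(298) + \Delta C_P \,(T'-298)\). A minimal sketch reproducing the exercise:

```python
# Temperature correction of a reaction enthalpy with constant heat capacities:
# Delta_rxn H(T2) = Delta_rxn H(T1) + Delta C_P * (T2 - T1).
# C_P values (J mol^-1 K^-1) are those quoted in the exercise.

dH_298 = -283.0                                        # kJ/mol at 298 K
Cp = {"CO2": 38.0, "CO": 29.0, "O2": 30.0}             # J/(mol K)
dCp = (Cp["CO2"] - Cp["CO"] - 0.5 * Cp["O2"]) * 1e-3   # kJ/(mol K)

dH_398 = dH_298 + dCp * (398 - 298)
print(round(dH_398, 1))  # -283.6 kJ/mol
```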
5.1: Carnot Cycle
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/05%3A_Thermodynamic_Cycles/5.01%3A_Carnot_Cycle
The main contribution of Carnot to thermodynamics is his abstraction of the steam engine’s essential features into a more general and idealized heat engine. The definition of Carnot’s idealized cycle is as follows:A Carnot cycle is an idealized process composed of two isothermal and two adiabatic transformations. Each transformation is either an expansion or a compression of an ideal gas. All transformations are assumed to be reversible, and no energy is lost to mechanical friction.A Carnot cycle connects two “heat reservoirs” at temperatures \(T_h\) (hot) and \(T_l\) (low), respectively. The reservoirs have a large thermal capacity so that their temperatures are unaffected by the cycle. The system is composed exclusively of the ideal gas, which is the only substance that changes temperature throughout the cycle. If we report the four transformations of a Carnot cycle on a \(PV\) diagram, we obtain the following plot:At this stage heat is released by the hot reservoir and absorbed by the ideal gas particles within the system. The gas expands at constant temperature, pushing the piston upwards and doing work on the surroundings.Starting the analysis of the cycle from point \(A\),\(^1\) the first transformation we encounter is an isothermal expansion at \(T_h\). 
Since the transformation is isothermal:\[ \Delta U_1 = \overbrace{W_1}^{<0} + \overbrace{Q_1}^{>0} = 0 \Rightarrow Q_1 = -W_1, \nonumber \]and heat and work can be calculated for this stage using Equation 2.4.14:\[\begin{equation} \begin{aligned} Q_1 & = \left| Q_h \right| = nRT_h \overbrace{\ln \dfrac{V_B}{V_A}}^{>0 \text{ since } V_B>V_A} > 0, \\ W_1 & = -Q_1 = - nRT_h \ln \dfrac{V_B}{V_A} < 0, \end{aligned} \end{equation} \nonumber \]where we denoted \(\left| Q_h \right|\) the absolute value of the heat that gets into the system from the hot reservoir.At this stage expansion continues, however there is no heat exchange between system and surroundings. Thus, the system is undergoing adiabatic expansion. The expansion allows the ideal gas particles to cool, decreasing the temperature of the system.The second transformation is an adiabatic expansion between \(T_h\) and \(T_l\). Since we are at adiabatic conditions:\[ Q_2 = 0 \Rightarrow \Delta U_2 = W_2, \nonumber \]and the negative energy (expansion work) can be calculated using:\[ \Delta U_2 = W_2 = n \underbrace{\int_{T_h}^{T_l} C_V dT}_{<0 \text{ since } T_\mathrm{l}<T_\mathrm{h}} < 0. \nonumber \]At this stage the surroundings do work on the system, compressing the gas; the heat generated by the compression is released to the cold reservoir, keeping the temperature of the system constant.The third transformation is an isothermal compression at \(T_l\). Since the transformation is isothermal:\[ \Delta U_3 = \overbrace{W_3}^{>0} + \overbrace{Q_3}^{<0} = 0 \Rightarrow Q_3 = -W_3, \nonumber \]and:\[\begin{equation} \begin{aligned} Q_3 & = -\left| Q_l \right| = nRT_l \overbrace{\ln \dfrac{V_D}{V_C}}^{<0 \text{ since } V_D<V_C} < 0, \\ W_3 & = -Q_3 = - nRT_l \ln \dfrac{V_D}{V_C} > 0, \end{aligned} \end{equation} \nonumber \]where \(\left| Q_l \right|\) is the absolute value of the heat that gets out of the system into the cold reservoir.No heat exchange occurs at this stage, however, the surroundings continue to do work on the system. Adiabatic compression occurs, which raises the temperature of the system and returns the piston to its original state (prior to stage one).The fourth and final transformation is an adiabatic compression that restores the system to point \(A\), bringing it from \(T_l\) to \(T_h\). 
Similarly to stage 3:\[ Q_4 = 0 \Rightarrow \Delta U_4 = W_4, \nonumber \]Since we are at adiabatic conditions. The energy associated with this process is now positive (compression work), and can be calculated using:\[ \Delta U_4 = W_4 = n \underbrace{\int_{T_l}^{T_h} C_V dT}_{>0 \text{ since } T_\mathrm{h}>T_\mathrm{l}} > 0. \nonumber \]Notice how \(\Delta U_4 = - \Delta U_2\) because \(\int_x^y=-\int_y^x\).This page titled 5.1: Carnot Cycle is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
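The energy balance of the four stages can be checked numerically. In this minimal sketch the temperatures, volume ratio, and the monatomic \(C_V = \frac{3}{2}R\) are illustrative assumptions, not values from the text:

```python
import math

# One mole of a monatomic ideal gas around a Carnot cycle (illustrative numbers).
R, n = 8.314, 1.0          # J/(mol K), mol
Th, Tl = 600.0, 300.0      # hot and cold reservoir temperatures, K
VA, VB = 1.0, 2.0          # only the ratio V_B/V_A matters here
Cv = 1.5 * R               # constant C_V of a monatomic ideal gas

Q1 = n * R * Th * math.log(VB / VA)   # stage 1: isothermal expansion, W1 = -Q1
W2 = n * Cv * (Tl - Th)               # stage 2: adiabatic expansion, Q2 = 0
# The two reversible adiabats give V_D/V_C = V_A/V_B:
Q3 = n * R * Tl * math.log(VA / VB)   # stage 3: isothermal compression, W3 = -Q3
W4 = n * Cv * (Th - Tl)               # stage 4: adiabatic compression, Q4 = 0

W_tot = -Q1 + W2 - Q3 + W4
print(round(W_tot, 2))                # negative: net work done BY the system
print(abs(W_tot + (Q1 + Q3)) < 1e-9)  # True: W_TOT = -(Q1 + Q3)
```

Note that the two adiabatic works cancel (\(W_2 = -W_4\)), so the net work comes entirely from the two isothermal stages.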
5.2: Energy, Heat, and Work in the Carnot Cycle
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/05%3A_Thermodynamic_Cycles/5.02%3A_Energy_Heat_and_Work_in_the_Carnot_Cycle
Summarizing the results of the previous sections, the total amount of energy for a Carnot cycle is:\[\begin{equation} \begin{aligned} \Delta U_{\text{TOT}} & = \Delta U_1+\Delta U_2+\Delta U_3+\Delta U_4 \\ & = 0 + n \int_{T_h}^{T_l} C_V dT + 0 + n \int_{T_l}^{T_h} C_V dT \\ & = n \int_{T_h}^{T_l} C_V dT - n \int_{T_h}^{T_l} C_V dT = 0 \\ \end{aligned} \end{equation} \nonumber \]which is obviously zero, since \(\oint dU=0\). The amounts of work and heat, however, are not zero, since \(Q\) and \(W\) are path functions. Therefore:\[\begin{equation} \begin{aligned} W_{\text{TOT}} & = W_1+W_2+W_3+W_4 \\ & = - nRT_h \ln \dfrac{V_B}{V_A} + n \int_{T_h}^{T_l} C_V dT - nRT_l \ln \dfrac{V_D}{V_C} + n \int_{T_l}^{T_h} C_V dT \\ & = - nRT_h \ln \dfrac{V_B}{V_A} - nRT_l \ln \dfrac{V_D}{V_C}, \\ \end{aligned} \end{equation} \nonumber \]which, considering that \(V_C/V_D=V_B/V_A\), reduces to:\[ W_{\text{TOT}} = - nR \left( T_h-T_l \right) \ln \dfrac{V_B}{V_A} < 0, \nonumber \]which is negative, because \(T_h>T_l\) and \(V_B>V_A\). Negative work means that the work is done by the system. In other words, the system is performing \(PV\)-work by transferring heat from a hot reservoir to a cold one via a Carnot cycle. On the other hand, for the heat:\[\begin{equation} \begin{aligned} Q_{\text{TOT}} & = Q_1+Q_2+Q_3+Q_4 \\ & = Q_h + 0 + Q_l + 0 \\ & = nRT_h \ln \dfrac{V_B}{V_A} + nRT_l \ln \dfrac{V_D}{V_C} \\ & = nR \left( T_h-T_l \right) \ln \dfrac{V_B}{V_A} = -W_{\text{TOT}}, \end{aligned} \end{equation} \nonumber \]which simplifies to:\[ W_{\text{TOT}}=-(Q_1+Q_3), \nonumber \]and, replacing \(Q_1\) and \(Q_3\) with the absolute values of the heats drawn from the hot and cold reservoirs, \(\left| Q_h \right|\) and \(\left| Q_l \right|\) respectively:\[ \left| W_{\text{TOT}} \right| = \left| Q_h \right| - \left| Q_l \right|, \nonumber \]or, in other words, more heat is extracted from the hot reservoir than is put into the cold one. 
The difference between the absolute value of these amounts of heat gives the total work of the cycle.Up to this point, we have discussed Carnot cycles working in the hot \(\rightarrow\) cold direction (\(A\) \(\rightarrow\) \(B\) \(\rightarrow\) \(C\) \(\rightarrow\) \(D\) \(\rightarrow\) \(A\)), since this is the primary mode of operation of heat engines that produce work. However, a heat engine could also—in principle—work in the reversed cold \(\rightarrow\) hot direction (\(A\) \(\rightarrow\) \(D\) \(\rightarrow\) \(C\) \(\rightarrow\) \(B\) \(\rightarrow\) \(A\)). Write the equations for heat, work, and energy of each stage of a Carnot cycle going in the opposite direction to the one discussed in sections 5.1 and 5.2.When the heat engine works in reverse order, the formulas remain the same, but all the signs in front of \(Q\), \(W\), and \(U\) are reversed. In this case, the total work gets into the system, and heat is transferred from the cold reservoir to the hot one. This reversed mode of operation is the basic principle behind refrigerators and air conditioning.This page titled 5.2: Energy, Heat, and Work in the Carnot Cycle is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
5.3: Efficiency of a Carnot Cycle
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/05%3A_Thermodynamic_Cycles/5.03%3A_Efficiency_of_a_Carnot_Cycle
The efficiency (\(\varepsilon\)) of a cycle is defined as the ratio between the absolute value of the work extracted from the cycle (\(\left| W_{\text{TOT}} \right|\)) and the heat that gets into the system (\(\left| Q_h \right|\)):\[ \varepsilon = \dfrac{\left| W_{\text{TOT}} \right|}{\left| Q_h \right|} =\dfrac{-W_{\text{TOT}}}{Q_1} \label{5.3.1} \]where the minus sign in front of the work is necessary because the efficiency is defined as a positive number. Substituting Equation 5.2.5 into eq. \ref{5.3.1}, we obtain:\[ \varepsilon = \dfrac{Q_3+Q_1}{Q_1} = 1+\dfrac{Q_3}{Q_1}. \nonumber \]If we go back to Equation \ref{5.3.1} and we replace Equation 5.2.3 for \(W_{\mathrm{TOT}}\) and Equation 5.1.3 for \(Q_1\), we obtain:\[ \varepsilon = \dfrac{nR \left( T_h - T_l \right) \ln V_B/V_A}{nRT_h \ln V_B/V_A} = \dfrac{T_h-T_l}{T_h}=1-\dfrac{T_l}{T_h }<1, \label{5.3.3} \]which proves that the efficiency of a Carnot cycle is strictly smaller than 1.\(^{1}\) In other words, no cycle can convert 100% of the heat it extracts from a hot reservoir into work. This finding had remarkable consequences on the entire thermodynamics field and set the foundation for the introduction of entropy. We will use eqs. \ref{5.3.1} and \ref{5.3.3} for this purpose in the next chapter, but we conclude the discussion on Carnot cycles by returning to Lord Kelvin. In 1851 he used this finding to formulate his famous statement: “It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. 
It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.”\(^{2}\) This statement conclusively disproved Joule’s original theories and demonstrated that there is some fundamental principle governing the flow of heat beyond the first law of thermodynamics.This page titled 5.3: Efficiency of a Carnot Cycle is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
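Using an illustrative Carnot cycle (hypothetical temperatures and volumes, not data from the text), the efficiency computed from the heats matches the one computed from the reservoir temperatures:

```python
import math

# Carnot efficiency two ways: eps = 1 + Q3/Q1 and eps = 1 - Tl/Th.
R, n = 8.314, 1.0
Th, Tl = 600.0, 300.0       # illustrative reservoir temperatures, K
VA, VB = 1.0, 2.0           # illustrative volumes (only the ratio matters)

Q1 = n * R * Th * math.log(VB / VA)   # heat absorbed from the hot reservoir (> 0)
Q3 = n * R * Tl * math.log(VA / VB)   # heat released to the cold reservoir (< 0)

eps_from_Q = 1 + Q3 / Q1
eps_from_T = 1 - Tl / Th
print(eps_from_T, abs(eps_from_Q - eps_from_T) < 1e-9)
```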
6.1: Entropy
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/06%3A_Second_Law_of_Thermodynamics/6.01%3A_Entropy
Let’s return to the definition of efficiency of a Carnot cycle and bring together eqs. 5.3.2 and 5.3.3:\[ \varepsilon = 1+\dfrac{Q_3}{Q_1} = 1-\dfrac{T_l}{T_h}. \nonumber \]Simplifying this equality, we obtain:\[ \dfrac{Q_3}{T_l} = -\dfrac{Q_1}{T_h}, \label{6.1.2} \]or alternatively:\[ \dfrac{Q_3}{T_l} + \dfrac{Q_1}{T_h} = 0. \label{6.1.3} \]The left hand side of Equation \ref{6.1.3} contains the sum of two quantities around the Carnot cycle, each calculated as \(\dfrac{Q_{\mathrm{REV}}}{T}\), with \(Q_{\mathrm{REV}}\) being the heat exchanged at reversible conditions (recall that according to Definition: Carnot Cycle each transformation in a Carnot cycle is reversible). Equation \ref{6.1.2} can be generalized to a sequence of connected Carnot cycles joining more than two isotherms by taking the summation across different temperatures:\[ \sum_i \dfrac{Q_{\mathrm{REV}}}{T_i} = 0, \label{6.1.4} \]where the summation happens across a sequence of Carnot cycles that connects different temperatures. Eqs. \ref{6.1.3} and \ref{6.1.4} show that for a Carnot cycle—or a series of connected Carnot cycles—there exists a conserved quantity obtained by dividing the heat associated with each reversible stage by the temperature at which such heat is exchanged. If a quantity is conserved around a cycle, it must be independent of the path, and therefore it is a state function. Based on these considerations, Clausius introduced in 1865 a new state function in thermodynamics, which he decided to call entropy and indicate with the letter \(S\):\[ S = \dfrac{Q_{\mathrm{REV}}}{T}. \nonumber \]We can use the new state function to generalize Equation \ref{6.1.4} to any reversible cycle in a \(PV\)-diagram by using the rules of calculus. 
First, we will slice \(S\) into an infinitesimal quantity:\[ dS = \dfrac{đQ_{\mathrm{REV}}}{T}, \nonumber \]then we can extend the summation across temperatures of Equation \ref{6.1.4} to a sum over infinitesimal quantities—that is the integral—around the cycle:\[ \oint dS = \oint \dfrac{đQ_{\mathrm{REV}}}{T} = 0. \nonumber \]This page titled 6.1: Entropy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
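The conservation of \(Q_{\mathrm{REV}}/T\) can be verified on an illustrative Carnot cycle (hypothetical temperatures and volumes):

```python
import math

# Check that the reversible heats of a Carnot cycle satisfy Q1/Th + Q3/Tl = 0.
R, n = 8.314, 1.0
Th, Tl = 600.0, 300.0
VA, VB = 1.0, 2.0

Q1 = n * R * Th * math.log(VB / VA)   # reversible heat exchanged at Th
Q3 = n * R * Tl * math.log(VA / VB)   # reversible heat exchanged at Tl

entropy_sum = Q1 / Th + Q3 / Tl
print(abs(entropy_sum) < 1e-9)  # True: Q_REV/T is conserved around the cycle
```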
6.2: Irreversible Cycles
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/06%3A_Second_Law_of_Thermodynamics/6.02%3A_Irreversible_Cycles
Up to this point, we have discussed reversible cycles only. Notice that the heat that enters the definition of entropy (Definition: Entropy) is the heat exchanged at reversible conditions since it is only at those conditions that the right-hand side of Equation 6.1.5 becomes a state function. What happens when we face an irreversible cycle? The efficiency of a Carnot cycle in Equation 5.3.3 is the maximum efficiency that an idealized thermodynamic cycle can reach. As such, any irreversible cycle will incontrovertibly have an efficiency smaller than the maximum efficiency of the idealized Carnot cycle. Therefore, Equation 6.1.1 for an irreversible cycle will not hold anymore and must be rewritten as:\[ \overbrace{1+\dfrac{Q_3}{Q_1}}^{\varepsilon_{\mathrm{IRR}}} < \overbrace{1-\dfrac{T_l}{T_h}}^{\varepsilon_{\mathrm{REV}}}, \label{6.2.1} \]and, following the same procedure used in section 6.1, we can rewrite Equation \ref{6.2.1} as:\[ \dfrac{Q^{\text{IRR}}_3}{Q^{\text{IRR}}_1} < - \dfrac{T_l}{T_h} \longrightarrow \dfrac{Q^{\text{IRR}}_3}{T_l} + \dfrac{Q^{\text{IRR}}_1}{T_h} < 0 \longrightarrow \sum_i \dfrac{Q_{\text{IRR}}}{T_i} < 0, \nonumber \]which can be generalized using calculus to:\[ \oint \dfrac{đQ_{\mathrm{IRR}}}{T} < 0. \label{6.2.3} \]Putting eqs. 6.1.6 and \ref{6.2.3} together, we obtain:\[ \oint \dfrac{đQ}{T} \leq 0, \label{6.2.4} \]where the equal sign holds for reversible transformations exclusively, while the inequality sign holds for irreversible ones. Equation \ref{6.2.4} is known as Clausius inequality.This page titled 6.2: Irreversible Cycles is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
6.3: The Second Law of Thermodynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/06%3A_Second_Law_of_Thermodynamics/6.03%3A_The_Second_Law_of_Thermodynamics
Now we can consider an isolated system undergoing a cycle composed of an irreversible forward transformation (1 \(\rightarrow\) 2) and a reversible backward transformation (2 \(\rightarrow\) 1). This cycle is similar to the one we used to discuss Joule’s expansion experiment. In this case, we have an intuitive understanding of the spontaneity of the irreversible expansion process, as well as of the non-spontaneity of the backward compression. Since the cycle has at least one irreversible step, it is overall irreversible, and we can calculate:\[ \oint \dfrac{đQ_{\mathrm{IRR}}}{T} = \int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T} + \int_2^1 \dfrac{đQ_{\mathrm{REV}}}{T}. \nonumber \]We can then use Clausius inequality (Equation 6.2.4) to write:\[\begin{equation} \begin{aligned} \int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T} + \int_2^1 \dfrac{đQ_{\mathrm{REV}}}{T} < 0, \end{aligned} \end{equation} \nonumber \]which can be rearranged as:\[ \underbrace{\int_1^2 \dfrac{đQ_{\mathrm{REV}}}{T}}_{\int_1^2 dS = \Delta S} > \underbrace{\int_1^2 \dfrac{đQ_{\mathrm{IRR}}}{T}}_{=0}, \label{6.3.3} \]where we have used the fact that, for an isolated system (the universe), \(đQ_{\mathrm{IRR}}=0\). Equation \ref{6.3.3} can be rewritten as:\[ \Delta S > 0, \label{6.3.4} \]which proves that for any irreversible process in an isolated system, the entropy is increasing. Using Equation \ref{6.3.4} and considering that the only system that is truly isolated is the universe, we can write a concise statement for a new fundamental law of thermodynamics:For any spontaneous process, the entropy of the universe is increasing.
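Joule’s expansion gives the simplest numerical illustration of this statement (the volumes below are illustrative): when an ideal gas expands freely into a vacuum, no heat is exchanged (\(đQ_{\mathrm{IRR}}=0\)), yet the entropy change of the gas—computed along a reversible isothermal path between the same two states—is positive.

```python
import math

# Illustrative free (Joule) expansion of an ideal gas into a vacuum:
# Q = W = 0 and T stays constant, so the surroundings are untouched.
n = 1.0               # moles of gas
R = 8.314             # gas constant [J/(mol K)]
V_i, V_f = 1.0, 2.0   # initial and final volumes (arbitrary units, V_f > V_i)

# The entropy must be evaluated along a reversible isothermal path:
# dS = n R ln(V_f / V_i) > 0 for any expansion; since the surroundings
# are unchanged, this is also the entropy change of the universe.
dS_universe = n * R * math.log(V_f / V_i)
```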
7.1: Calculation of ΔSsys
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/07%3A_Calculation_of_Entropy_and_the_Third_Law_of_Thermodynamics/7.01%3A_Calculation_of__Ssys
In general \(\Delta S^{\mathrm{sys}}\) can be calculated using either its Definition: Entropy, or its differential formula, Equation 6.1.5. In practice, it is always convenient to keep in mind that entropy is a state function, and as such it does not depend on the path. For this reason, we can break every transformation into elementary steps, and calculate the entropy on any path that goes from the initial state to the final state, such as, for example:\[\begin{equation} \begin{aligned} P_i, T_i & \quad \xrightarrow{ \Delta_{\text{TOT}} S_{\text{sys}} } \quad P_f, T_f \\ \scriptstyle{\Delta_1 S^{\text{sys}}} & \searrow \qquad \qquad \nearrow \; \scriptstyle{\Delta_2 S^{\text{sys}}} \\ & \qquad P_i, T_f \\ \\ \Delta_{\text{TOT}} S^{\text{sys}} & = \Delta_1 S^{\text{sys}} + \Delta_2 S^{\text{sys}}, \end{aligned} \end{equation} \nonumber \]with \(\Delta_1 S^{\text{sys}}\) calculated at constant \(P\), and \(\Delta_2 S^{\text{sys}}\) at constant \(T\). The most important elementary steps from which we can calculate the entropy resemble the prototypical processes for which we calculated the energy in section 3.1.\[ \Delta_{\mathrm{vap}} S = \dfrac{\Delta_{\mathrm{vap}}H}{T_B}, \nonumber \]with \(\Delta_{\mathrm{vap}}H\) being the enthalpy of vaporization of a substance, and \(T_B\) its boiling temperature.It is experimentally observed that the entropies of vaporization of many liquids have almost the same value of:\[ \Delta_{\mathrm{vap}} S \approx 10.5 R, \label{7.1.7} \]which corresponds in SI to the range of about 85–88 J/(mol K). 
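As a quick numerical check of the value quoted above (a sketch; the benzene data are approximate literature values quoted for illustration): \(10.5\,R\) indeed falls inside the 85–88 J/(mol K) window, and a typical non-associated liquid such as benzene (\(\Delta_{\mathrm{vap}}H \approx 30.7\) kJ/mol, \(T_B \approx 353\) K) lands close to it.

```python
R = 8.314                 # gas constant [J/(mol K)]
trouton = 10.5 * R        # the rule's value, ~87.3 J/(mol K)

# Benzene, approximate literature values (for illustration only):
dH_vap = 30.7e3           # enthalpy of vaporization [J/mol]
T_b = 353.0               # boiling temperature [K]
dS_vap = dH_vap / T_b     # entropy of vaporization [J/(mol K)]
```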
This simple rule is named Trouton’s rule, after the Irish scientist who discovered it, Frederick Thomas Trouton.Calculate the standard entropy of vaporization of water knowing \(\Delta_{\mathrm{vap}} H_{\mathrm{H}_2\mathrm{O}}^{-\kern-6pt{\ominus}\kern-6pt-}= 44 \ \text{kJ/mol}\), as calculated in Exercise 4.3.1.Using the definition \(\Delta_{\mathrm{vap}} S = \Delta_{\mathrm{vap}}H/T_B\)—and knowing that at standard conditions of \(P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \ \text{bar}\) the boiling temperature of water is 373 K—we calculate:\[ \Delta_{\mathrm{vap}} S_{\mathrm{H}_2\mathrm{O}}^{-\kern-6pt{\ominus}\kern-6pt-}= \dfrac{44 \times 10^3 \text{J/mol}}{373 \ \text{K}} = 118 \ \text{J/(mol K)}. \nonumber \]The entropy of vaporization of water is far from Trouton’s rule range of 85–88 J/(mol K) because of the hydrogen bond interactions between its molecules. Other similar exceptions are ethanol, formic acid, and hydrogen fluoride.Since adiabatic processes happen without the exchange of heat, \(đQ=0\), it would be tempting to think that \(\Delta S^{\mathrm{sys}} = 0\) for every one of them. In fact, a reversible adiabatic process is always a transformation at constant entropy (isentropic). However, the opposite is not always true, and an irreversible adiabatic transformation is usually associated with a change in entropy. To explain this fact, we need to recall that the definition of entropy includes the heat exchanged at reversible conditions only. Therefore, for irreversible adiabatic processes \(\Delta S^{\mathrm{sys}} \neq 0\). The calculation of the entropy change for an irreversible adiabatic transformation requires a substantial effort, and we will not cover it at this stage. 
The situation for adiabatic processes can be summarized as follows:\[\begin{equation} \begin{aligned} \text{reversible:} \qquad & \dfrac{đQ_{\mathrm{REV}}}{T} = 0 \longrightarrow \Delta S^{\mathrm{sys}} = 0 \quad \text{(isentropic),}\\ \text{irreversible:} \qquad & \dfrac{đQ_{\mathrm{IRR}}}{T} = 0 \longrightarrow \Delta S^{\mathrm{sys}} \neq 0. \\ \end{aligned} \end{equation} \nonumber \]We can calculate the heat exchanged in a process that happens at constant volume, \(Q_V\), using Equation 2.3.2. Since the heat exchanged at those conditions equals the change in energy (Equation 3.1.7), and the energy is a state function, we can use \(Q_V\) regardless of the path (reversible or irreversible). The entropy associated with the process will then be:\[ \Delta S^{\mathrm{sys}} = \int_i^f \dfrac{đQ_{\mathrm{REV}}}{T} = \int_i^f nC_V \dfrac{dT}{T}, \nonumber \]which, assuming \(C_V\) independent of temperature and solving the integral on the right-hand side, becomes:\[ \Delta S^{\mathrm{sys}} \approx n C_V \ln \dfrac{T_f}{T_i}. \nonumber \]Similarly to the constant volume case, we can calculate the heat exchanged in a process that happens at constant pressure, \(Q_P\), using Equation 2.3.4. Again, similarly to the previous case, \(Q_P\) equals a state function (the enthalpy), and we can use it regardless of the path to calculate the entropy as:\[ \Delta S^{\mathrm{sys}} = \int_i^f \dfrac{đQ_{\mathrm{REV}}}{T} = \int_i^f nC_P \dfrac{dT}{T}, \nonumber \]which, assuming \(C_P\) independent of temperature and solving the integral on the right-hand side, becomes:\[ \Delta S^{\mathrm{sys}} \approx n C_P \ln \dfrac{T_f}{T_i}. \nonumber \]
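The last two formulas can be sketched numerically (the gas and the temperatures below are illustrative): heating one mole of a monatomic ideal gas from 300 K to 600 K gives a larger entropy change at constant pressure than at constant volume, simply because \(C_P > C_V\).

```python
import math

n = 1.0                  # moles
R = 8.314                # gas constant [J/(mol K)]
C_V = 1.5 * R            # monatomic ideal gas (illustrative choice)
C_P = C_V + R            # ideal-gas relation C_P = C_V + R
T_i, T_f = 300.0, 600.0  # initial and final temperatures [K]

dS_V = n * C_V * math.log(T_f / T_i)  # isochoric heating
dS_P = n * C_P * math.log(T_f / T_i)  # isobaric heating
```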
7.2: Calculation of ΔSsurr
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/07%3A_Calculation_of_Entropy_and_the_Third_Law_of_Thermodynamics/7.02%3A_Calculation_of_Ssurr
While the entropy of the system can be broken down into simple cases and calculated using the formulas introduced above, the entropy of the surroundings does not require such a complicated treatment, and it can always be calculated as:\[ \Delta S^{\mathrm{surr}} = \dfrac{Q_{\text{surr}}}{T_{\text{surr}}}=\dfrac{-Q_{\text{sys}}}{T_{\text{surr}}}, \nonumber \]or, in differential form:\[ d S^{\mathrm{surr}} = \dfrac{đQ_{\text{surr}}}{T_{\text{surr}}}=\dfrac{-đQ_{\text{sys}}}{T_{\text{surr}}}, \nonumber \]where the substitution \(Q_{\text{surr}}=-Q_{\text{sys}}\) can be performed regardless of whether the transformation is reversible or not. In other words, the surroundings always absorb heat reversibly. To justify this statement, we need to restrict the analysis of the interaction between the system and the surroundings to just the vicinity of the system itself. Outside of a generally restricted region, the rest of the universe is so vast that it remains untouched by anything happening inside the system.\(^1\) To facilitate our comprehension, we might consider a system composed of a beaker on a workbench. We can then consider the room that the beaker is in as the immediate surroundings. To all effects, the beaker+room combination behaves as a system isolated from the rest of the universe. The room is obviously much larger than the beaker itself, and therefore every energy production that happens in the system will have minimal effect on the parameters of the room. For example, an exothermal chemical reaction occurring in the beaker will not affect the overall temperature of the room substantially. 
When we study our reaction, \(T_{\text{surr}}\) will be constant, and the transfer of heat from the reaction to the surroundings will happen at reversible conditions.Calculate the changes in entropy of the universe for the process of 1 mol of supercooled water, freezing at –10°C, knowing the following data: \(\Delta_{\mathrm{fus}}H = 6 \; \text{kJ/mol}\), \(C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}}=76 \; \text{J/(mol K)}\), \(C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}=38 \; \text{J/(mol K)}\), and assuming both \(C_P\) to be independent of temperature.\(\Delta S^{\mathrm{sys}}\) for the process under consideration can be calculated using the following cycle:\[\begin{equation} \begin{aligned} \mathrm{H}_2 \mathrm{O}_{(l)} & \quad \xrightarrow{\quad \Delta S_{\text{sys}} \quad} \quad \mathrm{H}_2 \mathrm{O}_{(s)} \qquad \quad T=263\;K\\ \scriptstyle{\Delta S_1} \; \bigg\downarrow \quad & \qquad \qquad \qquad \qquad \scriptstyle{\bigg\uparrow \; \Delta S_3} \\ \mathrm{H}_2 \mathrm{O}_{(l)} & \quad \xrightarrow{\quad \Delta S_2 \qquad} \quad \mathrm{H}_2\mathrm{O}_{(s)} \qquad \; T=273\;K\\ \\ \Delta S^{\text{sys}} & = \Delta S_1 + \Delta S_2 + \Delta S_3 \end{aligned} \end{equation}\label{7.2.3} \]\(\Delta S_1\) and \(\Delta S_3\) are the isobaric heating and cooling processes of liquid and solid water, respectively, and can be calculated by filling the given data into Equation 7.1.12. \(\Delta S_2\) is a phase change (isothermal process) and can be calculated by translating Equation 7.1.6 to the freezing transformation. Overall:\[\begin{equation} \begin{aligned} \Delta S^{\text{sys}} & = \int_{263}^{273} \dfrac{C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}}}{T}dT+\dfrac{-\Delta_{\mathrm{fus}}H}{273}+\int_{273}^{263} \dfrac{C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}}{T}dT \\ & = 76 \ln \dfrac{273}{263} - \dfrac{6 \times 10^3}{273} + 38 \ln \dfrac{263}{273}= -20.6 \; \text{J/K}. \end{aligned} \end{equation} \label{7.2.4} \]Don’t be confused by the fact that \(\Delta S^{\text{sys}}\) is negative. 
This is not the entropy of the universe, so by itself it tells us nothing about spontaneity! We can now calculate \(\Delta S^{\text{surr}}\) from \(Q_{\text{sys}}\), noting that we can calculate the enthalpy around the same cycle in Equation \ref{7.2.3}. To do that, we already have \(\Delta_{\mathrm{fus}}H\) from the given data, and we can calculate \(\Delta H_1\) and \(\Delta H_3\) using Equation 2.3.4.\[\begin{equation} \begin{aligned} Q^{\text{sys}} & = \Delta H = \int_{263}^{273} C_P^{\mathrm{H}_2 \mathrm{O}_{(l)}} dT + (-\Delta_{\mathrm{fus}}H) + \int_{273}^{263} C_P^{\mathrm{H}_2 \mathrm{O}_{(s)}}dT \\ & = 0.76 - 6 - 0.38 \\ &= -5.6 \; \text{kJ}. \\ \\ \Delta S^{\text{surr}} & = \dfrac{-Q_{\text{sys}}}{T}=\dfrac{5.6 \times 10^3}{263} = + 21.3 \; \text{J/K}. \\ \end{aligned} \end{equation} \label{7.2.5} \]Bringing \ref{7.2.3} and \ref{7.2.5} results together, we obtain:\[ \Delta S^{\text{universe}}=\Delta S^{\text{sys}} + \Delta S^{\text{surr}} = -20.6+21.3=+0.7 \; \text{J/K}. \nonumber \]Since the entropy changes in the universe are positive, the process is spontaneous, as expected.
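The whole supercooled-water cycle above can be verified numerically (a sketch using the same data as the exercise, carrying the unrounded value \(Q^{\text{sys}} = -5.62\) kJ through the calculation):

```python
import math

Cp_liq, Cp_sol = 76.0, 38.0   # heat capacities of liquid and solid water [J/(mol K)]
dH_fus = 6.0e3                # enthalpy of fusion at 273 K [J/mol]
T1, T2 = 263.0, 273.0         # freezing temperature and normal melting point [K]

# System: heat the liquid 263 -> 273 K, freeze at 273 K, cool the solid 273 -> 263 K
dS_sys = (Cp_liq * math.log(T2 / T1)
          - dH_fus / T2
          + Cp_sol * math.log(T1 / T2))          # [J/K]

# Heat released by the system along the same cycle
Q_sys = Cp_liq * (T2 - T1) - dH_fus + Cp_sol * (T1 - T2)   # [J]

dS_surr = -Q_sys / T1                             # surroundings at 263 K
dS_universe = dS_sys + dS_surr                    # must be positive (spontaneous)
```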
7.3: Clausius Theorem
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/07%3A_Calculation_of_Entropy_and_the_Third_Law_of_Thermodynamics/7.03%3A_Clausius_Theorem
By replacing Equation 7.2.2 into the definition of the entropy of the universe, \(d S^{\mathrm{universe}} = d S^{\mathrm{sys}} + d S^{\mathrm{surr}}\), we can write the differential change in the entropy of the system as:\[ d S^{\mathrm{sys}} = d S^{\mathrm{universe}} - d S^{\mathrm{surr}} = d S^{\mathrm{universe}} + \dfrac{đQ_{\text{sys}}}{T}. \nonumber \]According to the second law, for any spontaneous process \(d S^{\mathrm{universe}}\geq0\), and therefore, replacing it into Equation \(\PageIndex{1}\):\[ d S^{\mathrm{sys}} \geq \dfrac{đQ}{T}, \nonumber \]which is the mathematical expression of the so-called Clausius theorem. Eq. \(\PageIndex{2}\) distinguishes between three conditions:\[ \begin{aligned} d S^{\mathrm{sys}} > \dfrac{đQ}{T} \qquad &\text{spontaneous, irreversible transformation} \\ d S^{\mathrm{sys}} = \dfrac{đQ}{T} \qquad &\text{reversible transformation} \\ d S^{\mathrm{sys}} < \dfrac{đQ}{T} \qquad &\text{non-spontaneous, irreversible transformation.} \end{aligned} \nonumber \]Clausius theorem provides a useful criterion to infer the spontaneity of a process, especially in cases where it’s hard to calculate \(\Delta S^{\mathrm{universe}}\). Eq. \(\PageIndex{2}\) requires knowledge of quantities that are dependent on the system exclusively, such as the difference in entropy, the amount of heat that crosses the boundaries, and the temperature at which the process happens.1 If a process produces more entropy than the amount of heat that crosses the boundaries divided by the absolute temperature, it will be spontaneous. Vice versa, if the entropy produced is smaller than the amount of heat crossing the boundaries divided by the absolute temperature, the process will be non-spontaneous. The equality holds for systems in equilibrium with their surroundings, or for reversible processes since they happen through a series of equilibrium states. Measuring or calculating these quantities, however, is not always simple. 
We will return to the Clausius theorem in the next chapter when we seek more convenient indicators of spontaneity.
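The criterion can be sketched with the supercooled-water numbers from section 7.2: the system's entropy change, \(\Delta S^{\mathrm{sys}} = -20.6\) J/K, exceeds \(Q/T_{\text{surr}} = -5600/263 \approx -21.3\) J/K, so the freezing is spontaneous even though \(\Delta S^{\mathrm{sys}} < 0\).

```python
# Rounded numbers from the supercooled-water example of section 7.2:
dS_sys = -20.6        # entropy change of the system [J/K]
Q_sys = -5.6e3        # heat released by the system [J]
T_surr = 263.0        # temperature of the surroundings [K]

Q_over_T = Q_sys / T_surr        # ~ -21.3 J/K
spontaneous = dS_sys > Q_over_T  # Clausius criterion: dS_sys > Q/T
```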
7.4: The Third Law of Thermodynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/07%3A_Calculation_of_Entropy_and_the_Third_Law_of_Thermodynamics/7.04%3A_The_Third_Law_of_Thermodynamics
In chapter 4, we have discussed how to calculate reaction enthalpies for any reaction, given the formation enthalpies of reactants and products. In this section, we will try to do the same for reaction entropies. In this case, however, our task is simplified by a fundamental law of thermodynamics, introduced by Walther Hermann Nernst (1864–1941) in 1906.1 The statement that was initially known as Nernst’s Theorem is now officially recognized as the third fundamental law of thermodynamics, and it has the following definition:The entropy of a perfectly ordered, pure, crystalline substance is zero at \(T=0 \; \text{K}\).This law sets an unambiguous zero of the entropy scale, similar to what happens with absolute zero in the temperature scale. The absolute value of the entropy of every substance can then be calculated in reference to this unambiguous zero. As such, absolute entropies are always positive. This is in stark contrast to what happened for the enthalpy. An unambiguous zero of the enthalpy scale is lacking, and standard formation enthalpies (which might be negative) must be agreed upon to calculate relative differences.In simpler terms, given a substance \(i\), we are not able to measure absolute values of its enthalpy \(H_i\) (and we must resort to known enthalpy differences, such as \(\Delta_{\mathrm{f}} H^{-\kern-6pt{\ominus}\kern-6pt-}\) at standard pressure). At the same time, for entropy, we can measure \(S_i\) thanks to the third law, and we usually report them as \(S_i^{-\kern-6pt{\ominus}\kern-6pt-}\). A comprehensive list of standard entropies of inorganic and organic compounds is reported in appendix 16. 
Reaction entropies can be calculated from the tabulated standard entropies as differences between products and reactants, using:\[ \Delta_{\text{rxn}} S^{-\kern-6pt{\ominus}\kern-6pt-}= \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}, \nonumber \]with \(\nu_i\) being the usual stoichiometric coefficients with their signs given in Definition: Signs of the Stoichiometric Coefficients.The careful wording in the definition of the third law (Definition: Third Law of Thermodynamics) allows for the fact that some crystals might form with defects (i.e., not as a perfectly ordered crystal). In this case, a residual entropy will be present even at \(T=0 \; \text{K}\). However, this residual entropy can be removed, at least in theory, by forcing the substance into a perfectly ordered crystal.2An interesting corollary to the third law states that it is impossible to find a procedure that reduces the temperature of a substance to \(T=0 \; \text{K}\) in a finite number of steps.
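The formula above can be sketched numerically for the ammonia synthesis \(\mathrm{N}_2 + 3\mathrm{H}_2 \rightarrow 2\mathrm{NH}_3\) (the standard molar entropies below are approximate literature values at 298 K, quoted for illustration only):

```python
# Approximate standard molar entropies at 298 K [J/(mol K)] -- illustrative
# literature values, not taken from the text.
S_std = {"N2": 191.6, "H2": 130.7, "NH3": 192.5}

# Stoichiometric coefficients with their signs (products +, reactants -)
nu = {"N2": -1, "H2": -3, "NH3": +2}

# Reaction entropy as the signed sum over all species
dS_rxn = sum(nu[s] * S_std[s] for s in nu)   # [J/(mol K)]
```

The negative result is expected on physical grounds: four moles of gas are converted into two, so the amount of gas (and with it the entropy) decreases.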
8.1: Fundamental Equation of Thermodynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/08%3A_Thermodynamic_Potentials/8.01%3A_Fundamental_Equation_of_Thermodynamics
Let’s summarize some of the results from the first and second law of thermodynamics that we have seen so far. For reversible processes in closed systems:\[\begin{equation} \begin{aligned} \text{From 1}^{\text{st}} \text{ Law:} \qquad \quad & dU = đQ_{\mathrm{REV}}-PdV \\ \text{From The Definition of Entropy:} \qquad \quad & dS = \dfrac{đQ_{\mathrm{REV}}}{T} \rightarrow đQ_{\mathrm{REV}} = TdS \\ \\ \Rightarrow \quad & dU = TdS - PdV. \end{aligned} \end{equation} \label{8.1.1} \]Equation \ref{8.1.1} is called the fundamental equation of thermodynamics since it combines the first and the second laws. Even though we started the derivation above by restricting to reversible transformations only, if we look carefully at Equation \ref{8.1.1}, we notice that it exclusively involves state functions. As such, it applies to both reversible and irreversible processes. The fundamental equation, however, remains constrained to systems of fixed composition. This fact restricts its utility for chemistry, since when a chemical reaction happens, the number of moles of each species in the system changes, and Equation \ref{8.1.1} no longer accounts for these changes.At the end of the 19th century, Josiah Willard Gibbs (1839–1903) proposed an important addition to the fundamental equation to account for chemical reactions. 
Gibbs was able to do so by introducing a new quantity that he called the chemical potential:The chemical potential is the amount of energy absorbed or released due to a change of the particle number of a given chemical species.The chemical potential of species \(i\) is usually abbreviated as \(\mu_i\), and it enters the fundamental equation of thermodynamics as:\[ dU = TdS-PdV+\sum_i\mu_i dn_i, \nonumber \]where \(dn_i\) is the differential change in the number of moles of substance \(i\), and the summation extends over all chemical species in the system.According to the fundamental equation, the internal energy of a system is a function of the three variables entropy, \(S\), volume, \(V\), and the numbers of moles \(\{n_i\}\).1 Because of their importance in determining the internal energy, these three variables are crucial in thermodynamics. Under several circumstances, however, they might not be the most convenient variables to use.2 To emphasize the important connections given by the fundamental equation, we can use the notation \(U(S,V,\{n_i\})\) and we can term \(S\), \(V\), and \(\{n_i\}\) natural variables of the energy.
8.2: Thermodynamic Potentials
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/08%3A_Thermodynamic_Potentials/8.02%3A_Thermodynamic_Potentials
Starting from the fundamental equation, we can define new thermodynamic state functions that are more convenient to use under certain specific conditions. The new functions are determined by using a mathematical procedure called the Legendre transformation. A Legendre transformation is a linear change in variables that leads from an initial mathematical function to a new function obtained by subtracting one or more products of conjugate variables.1Taking the internal energy as defined in Equation 8.1.1, we can perform such a procedure by subtracting products of the following conjugate variables pairs: \(T \text{ and } S\) or \(-P \text{ and } V\). This procedure aims to define new state functions that depend on more convenient natural variables.2 The new functions are called “thermodynamic potential energies,” or simply thermodynamic potentials.3 An example of this procedure is given by the definition of enthalpy that we have already seen in section 3.1.4. If we take the internal energy and subtract the product of two conjugate variables (\(-P\) and \(V\)), we obtain a new state function called enthalpy, as we did in Equation 3.1.9. Taking the differential of this definition, we obtain:\[ dH = dU +VdP +PdV, \label{8.2.1} \]and using the fundamental equation, Equation 8.1.2, to replace \(dU\), we obtain:\[\begin{equation} \begin{aligned} dH & = TdS -PdV +\sum_i\mu_i dn_i +VdP +PdV \\ & = TdS +VdP +\sum_i\mu_i dn_i. \label{8.2.2}\end{aligned} \end{equation} \]which is the fundamental equation for enthalpy. The natural variables of the enthalpy are \(S\), \(P\), and \(\{n_i\}\). The Legendre transformation has allowed us to go from \(U(S,V,\{n_i\})\) to \(H(S,P,\{n_i\})\) by replacing the dependence on the extensive variable, \(V\), with an intensive one, \(P\).Following the same procedure, we can perform another Legendre transformation to replace the entropy with a more convenient intensive variable such as the temperature. 
This can be done by defining a new function called the Helmholtz free energy, \(A\), as:\[ A = U -TS \label{8.2.3} \]which, taking the differential and using the fundamental equation (Equation 8.1.2) to replace \(dU\), becomes:\[\begin{equation} \begin{aligned} dA &= dU -SdT -TdS = TdS - PdV +\sum_i \mu_i dn_i -SdT -TdS \\ &= -SdT -PdV +\sum_i \mu_i dn_i. \end{aligned} \end{equation} \label{8.2.4} \]The Helmholtz free energy is named after Hermann Ludwig Ferdinand von Helmholtz (1821—1894), and its natural variables are temperature, volume, and the number of moles.Finally, suppose we perform a Legendre transformation on the internal energy to replace both the entropy and the volume with intensive variables. In that case, we can define a new function called the Gibbs free energy, \(G\), as:\[ G = U -TS +PV \label{8.2.5} \]which, taking again the differential and using Equation 8.1.2, becomes:\[ \begin{aligned} dG &= dU -SdT -TdS +VdP +PdV \\ &= TdS - PdV +\sum_i\mu_i dn_i -SdT -TdS +VdP +PdV \\ &= VdP -SdT +\sum_i\mu_i dn_i. \end{aligned} \label{8.2.6} \]The Gibbs free energy is named after Willard Gibbs himself, and its natural variables are temperature, pressure, and number of moles.A summary of the four thermodynamic potentials is given in the following table.The thermodynamic potentials are the analog of the potential energy in classical mechanics. Since the potential energy is interpreted as the capacity to do work, the thermodynamic potentials assume the following interpretations:Where non-mechanical work is defined as any type of work that is not expansion or compression (\(PV\)–work). A typical example of non-mechanical work is electrical work.
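The fundamental equation for \(dA\) can be checked numerically for an ideal gas (a sketch; the entropy constant \(S_0\) and the monatomic heat capacity are illustrative assumptions): writing \(A(T,V) = nC_VT - TS(T,V)\) with the ideal-gas entropy \(S = S_0 + nC_V\ln T + nR\ln V\), the partial derivatives of \(A\) should reproduce \(-S\) and \(-P\).

```python
import math

n, R = 1.0, 8.314        # moles and gas constant [J/(mol K)]
C_V = 1.5 * R            # monatomic ideal gas (illustrative)
S0 = 100.0               # arbitrary entropy constant [J/K] (illustrative)

def S(T, V):
    # Ideal-gas entropy up to a constant: S = S0 + n*Cv*ln(T) + n*R*ln(V)
    return S0 + n * C_V * math.log(T) + n * R * math.log(V)

def A(T, V):
    # Helmholtz free energy A = U - T*S, with U = n*Cv*T for an ideal gas
    return n * C_V * T - T * S(T, V)

# Central finite differences at an arbitrary state point:
T, V, h = 300.0, 1.0, 1e-6
dA_dT = (A(T + h, V) - A(T - h, V)) / (2 * h)   # should equal -S(T, V)
dA_dV = (A(T, V + h) - A(T, V - h)) / (2 * h)   # should equal -P = -n*R*T/V
```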
8.3: Free Energies
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/08%3A_Thermodynamic_Potentials/8.03%3A_Free_Energies
The Legendre transformation procedure translates all information contained in the original function to the new one. Therefore, \(H(S,P,\{n_i\})\), \(A(T,V,\{n_i\})\), and \(G(T,P,\{n_i\})\) all contain the same information that is in \(U(S,V,\{n_i\})\). However, the new functions depend on different natural variables, and they are useful at different conditions. For example, when we want to study chemical changes, we are interested in studying the term \(\sum_i\mu_i dn_i\) that appears in each thermodynamic potential. To do so, we need to isolate the chemical term by keeping all other natural variables constant. For example, changes in the chemical term will correspond to changes in the internal energy at constant \(S\) and constant \(V\):\[ dU(S,V,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dS=dV=0. \label{8.3.1} \]Similarly:\[\begin{equation} \begin{aligned} dH(S,P,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dS=dP=0, \\ dA(T,V,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dT=dV=0, \\ dG(T,P,\{n_i\}) = \sum_i\mu_i dn_i \quad \text{if} \quad dT=dP=0. \end{aligned} \end{equation}\label{8.3.2} \]The latter two cases are particularly interesting since most of chemistry happens at either constant volume,1 or constant pressure.2 Since \(dS=0\) is not a requirement for both free energies to describe chemical changes, we can apply either of them to study non-isentropic processes. If a process is not isentropic, it either increases the entropy of the universe, or it decreases it. Therefore—according to the second law—it is either spontaneous or not. 
Using this concept in conjunction with Clausius theorem, we can devise new criteria for inferring the spontaneity of a process that depends exclusively on the free energies.Recalling Clausius theorem:\[ d S^{\mathrm{sys}} \geq \dfrac{đQ}{T_{\text{surr}}} \quad \longrightarrow \quad TdS \geq đQ, \label{8.3.3} \]we can consider the two cases: constant \(V\) (\(đQ_V=dU\), left), and constant \(P\) (\(đQ_P=dH\), right):\[\begin{equation} \begin{aligned} \text{constant} & \; V: & \qquad \qquad & \qquad \qquad & \text{constant} & \; P: \\ \\ TdS & \geq dU & & & TdS & \geq dH \\ \\ TdS -dU & \geq 0 & & & TdS -dH & \geq 0 \\ \end{aligned} \end{equation} \label{8.3.4} \]we can then simplify the definition of free energies, eqs. 8.2.4 and 8.2.6:\[\begin{equation} \begin{aligned} \text{constant} & \; T,V: & \qquad & \qquad & \text{constant} & \; T,P: \\ \\ (dA)_{T,V} &= dU -TdS & & & (dG)_{T,P} &= dH - TdS \\ \\ dU = (dA)_{T,V} &+TdS & & & dH = (dG)_{T,P} &+TdS \end{aligned}\end{equation} \label{8.3.5} \]and by merging \(dU\) and \(dH\) from eqs. \ref{8.3.5} into Clausius theorem expressed using eqs. \ref{8.3.4}, we obtain:\[\begin{equation} \begin{aligned} TdS -(dA)_{T,V} &- TdS \geq 0 & \qquad & \qquad & TdS -(dG)_{T,P} &- TdS \geq 0 \\ \\ (dA)_{T,V} & \leq 0 & \qquad & \qquad & (dG)_{T,P} & \leq 0. \\ \end{aligned} \end{equation} \label{8.3.6} \]These equations represent the conditions on \(dA\) and \(dG\) for inferring the spontaneity of a process, and can be summarized as follows:During a spontaneous process at constant temperature and volume, the Helmholtz free energy will decrease \((dA<0)\), until it reaches a stationary point at which the system will be at equilibrium \((dA=0)\). 
During a spontaneous process at constant temperature and pressure, the Gibbs free energy will decrease \((dG<0)\), until it reaches a stationary point at which the system will be at equilibrium \((dG=0)\).
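As a sketch of the criterion on \(dG\), we can reuse the rounded numbers from the supercooled-water example of section 7.2: at constant \(T\) and \(P\), \(\Delta G = \Delta H - T\Delta S\), and a negative result confirms the spontaneity of freezing at 263 K without any reference to the surroundings.

```python
# Rounded numbers from the supercooled-water example of section 7.2:
dH_sys = -5.6e3    # enthalpy change of freezing at 263 K [J]
dS_sys = -20.6     # entropy change of the system [J/K]
T = 263.0          # constant temperature [K]

dG = dH_sys - T * dS_sys   # Gibbs free energy change at constant T, P
spontaneous = dG < 0       # criterion (dG)_{T,P} < 0
```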
8.4: Maxwell Relations
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/08%3A_Thermodynamic_Potentials/8.04%3A_Maxwell_Relations
Let’s consider the fundamental equations for the thermodynamic potentials that we have derived in sections 8.1 and 8.2:\[\begin{equation} \begin{aligned} dU(S,V,\{n_i\}) &= \enspace T dS -P dV + \sum_i \mu_i dn_i \\ dH(S,P,\{n_i\}) &= \enspace T dS + V dP + \sum_i \mu_i dn_i \\ dA(T,V,\{n_i\}) &= -S dT -P dV + \sum_i \mu_i dn_i \\ dG(T,P,\{n_i\}) &= -S dT + V dP + \sum_i \mu_i dn_i\;. \end{aligned} \end{equation} \nonumber \]From the knowledge of the natural variables of each potential, we could reconstruct these formulas by using the total differential formula:\[\begin{equation} \begin{aligned} dU &= \underbrace{\left(\dfrac{\partial U}{\partial S} \right)_{V,\{n_i\}}}_{T} dS + \underbrace{\left(\dfrac{\partial U}{\partial V} \right)_{S,\{n_i\}}}_{-P} dV + \sum_i \underbrace{\left(\dfrac{\partial U}{\partial n_i} \right)_{S,V,\{n_{j \neq i}\}}}_{\mu_i} dn_i \\ dH &= \underbrace{\left(\dfrac{\partial H}{\partial S} \right)_{P,\{n_i\}}}_{T} dS + \underbrace{\left(\dfrac{\partial H}{\partial P} \right)_{S,\{n_i\}}}_{V} dP + \sum_i \underbrace{\left(\dfrac{\partial H}{\partial n_i} \right)_{S,P,\{n_{j \neq i}\}}}_{\mu_i} dn_i \\ dA &= \underbrace{\left(\dfrac{\partial A}{\partial T} \right)_{V,\{n_i\}}}_{-S} dT + \underbrace{\left(\dfrac{\partial A}{\partial V} \right)_{T,\{n_i\}}}_{-P} dV + \sum_i \underbrace{\left(\dfrac{\partial A}{\partial n_i} \right)_{T,V,\{n_{j \neq i}\}}}_{\mu_i} dn_i \\ dG &= \underbrace{\left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}}_{-S} dT + \underbrace{\left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}}_{V} dP + \sum_i \underbrace{\left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}}_{\mu_i} dn_i\;, \end{aligned} \end{equation} \nonumber \]we can derive the following new definitions:\[\begin{equation} \begin{aligned} T &= \left(\dfrac{\partial U}{\partial S} \right)_{V,\{n_i\}} = \left(\dfrac{\partial H}{\partial S} \right)_{P,\{n_i\}} \\ -P &= \left(\dfrac{\partial U}{\partial V} \right)_{S,\{n_i\}} = 
\left(\dfrac{\partial A}{\partial V} \right)_{T,\{n_i\}} \\ V &= \left(\dfrac{\partial H}{\partial P} \right)_{S,\{n_i\}} = \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \\ -S &= \left(\dfrac{\partial A}{\partial T} \right)_{V,\{n_i\}} = \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \\ \text{and:} \\ \mu_i &= \left(\dfrac{\partial U}{\partial n_i} \right)_{S,V,\{n_{j \neq i}\}} = \left(\dfrac{\partial H}{\partial n_i} \right)_{S,P,\{n_{j \neq i}\}} \\ &= \left(\dfrac{\partial A}{\partial n_i} \right)_{T,V,\{n_{j \neq i}\}} = \left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}\;. \end{aligned} \end{equation} \nonumber \]Since \(T\), \(P\), \(V\), and \(S\) are now defined as partial first derivatives of a thermodynamic potential, we can now take a second partial derivative with respect to a different variable, and rely on Schwarz’s theorem to derive the following relations:\[\begin{equation} \begin{aligned} \dfrac{\partial^2 U }{\partial S \partial V} &=& +\left(\dfrac{\partial T}{\partial V}\right)_{S,\{n_i\}} &=& -\left(\dfrac{\partial P}{\partial S}\right)_{V,\{n_i\}} \\ \dfrac{\partial^2 H }{\partial S \partial P} &=& +\left(\dfrac{\partial T}{\partial P}\right)_{S,\{n_i\}} &=& +\left(\dfrac{\partial V}{\partial S}\right)_{P,\{n_i\}} \\ -\dfrac{\partial^2 A }{\partial T \partial V} &=& +\left(\dfrac{\partial S}{\partial V}\right)_{T,\{n_i\}} &=& +\left(\dfrac{\partial P}{\partial T}\right)_{V,\{n_i\}} \\ \dfrac{\partial^2 G }{\partial T \partial P} &=& -\left(\dfrac{\partial S}{\partial P}\right)_{T,\{n_i\}} &=& +\left(\dfrac{\partial V}{\partial T}\right)_{P,\{n_i\}} \end{aligned} \end{equation}\label{8.4.4} \]The relations in \ref{8.4.4} are called Maxwell relations,1 and are useful in experimental settings to relate quantities that are hard to measure with others that are easier to obtain.Derive the last Maxwell relation in Equation \ref{8.4.4}.We can 
start our derivation from the definition of \(V\) and \(S\) as partial derivatives of \(G\):\[ V = \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \qquad \text{and:} \qquad -S = \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}, \nonumber \]and then take a second partial derivative of each quantity with respect to the other variable:\[\begin{equation} \begin{aligned} \left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}} &=\dfrac{\partial}{\partial T}\left[ \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \right]_{P,\{n_i\}} \\ \\ -\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}} &=\dfrac{\partial}{\partial P}\left[ \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \right]_{T,\{n_i\}} \;. \end{aligned} \end{equation} \nonumber \]These two derivatives are mixed second partial derivatives of \(G\) with respect to \(T\) and \(P\), and therefore, according to Schwarz's theorem, they are equal to each other:\[\begin{equation} \begin{aligned} \dfrac{\partial}{\partial T}\left[ \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}} \right]_{P,\{n_i\}} &= \dfrac{\partial}{\partial P}\left[ \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}} \right]_{T,\{n_i\}}, \\ \\ \text{hence:} \\ \\ \left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}} &= -\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}}, \end{aligned} \end{equation} \nonumber \]which is the last of the Maxwell relations, as defined in Equation \ref{8.4.4}.
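The Maxwell relation just derived can be checked numerically for an ideal gas, for which both sides are known in closed form. The sketch below is illustrative only: the state point, step sizes, and reference entropy are arbitrary choices, and the centered finite differences stand in for the partial derivatives.

```python
# Numerical check of the Maxwell relation (dV/dT)_P = -(dS/dP)_T
# for one mole of a monatomic ideal gas. The state point (T, P),
# the step sizes, and the reference entropy S0 are arbitrary.
import math

R = 8.314        # J/(mol K), gas constant
n = 1.0          # mol
Cp = 2.5 * R     # J/(mol K), molar heat capacity of a monatomic ideal gas

def V(T, P):
    """Ideal-gas volume in m^3: V = nRT/P."""
    return n * R * T / P

def S(T, P, S0=100.0, T0=298.15, P0=1e5):
    """Ideal-gas entropy (J/K) relative to an arbitrary reference state."""
    return S0 + n * Cp * math.log(T / T0) - n * R * math.log(P / P0)

T, P = 350.0, 2e5     # state point: 350 K, 2 bar
dT, dP = 1e-3, 1.0    # finite-difference steps

dV_dT = (V(T + dT, P) - V(T - dT, P)) / (2 * dT)   # (dV/dT)_P
dS_dP = (S(T, P + dP) - S(T, P - dP)) / (2 * dP)   # (dS/dP)_T

print(dV_dT, -dS_dP)  # both sides equal nR/P, approximately 4.16e-5 m^3/K
assert abs(dV_dT + dS_dP) < 1e-6 * abs(dV_dT)
```

Analytically, \((\partial V/\partial T)_P = nR/P\) and \((\partial S/\partial P)_T = -nR/P\), so the two finite-difference estimates agree to well within the tolerance.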
This relation is particularly useful because it connects the quantity \(\left(\dfrac{\partial S}{\partial P} \right)_{T,\{n_i\}}\), which is very difficult to measure directly, with the quantity \(\left(\dfrac{\partial V}{\partial T} \right)_{P,\{n_i\}}\), which is easily obtained from an experiment that determines the isobaric volumetric thermal expansion coefficient.This page titled 8.4: Maxwell Relations is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
9.1: Gibbs Equation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/09%3A_Gibbs_Free_Energy/9.01%3A_Gibbs_Equation
Recalling from chapter 8, the definition of \(G\) is:\[ G = U -TS +PV = H-TS, \nonumber \]which, taking the differential at constant \(T\) and \(P\), becomes:\[ dG = dH \; \overbrace{-SdT}^{=0} -TdS = dH -TdS. \label{9.1.2} \]Integrating Equation \ref{9.1.2} between the initial and final states of a process results in:\[\begin{equation} \begin{aligned} \int_i^f dG &= \int_i^f dH -T \int_i^f dS \\ \\ \Delta G &= \Delta H -T \Delta S \end{aligned} \end{equation} \label{9.1.3} \]which is the famous Gibbs equation for \(\Delta G\). Using the definition of a spontaneous process, we can use the criterion \(\Delta G \leq 0\) to infer the spontaneity of a chemical process that happens at constant \(T\) and \(P\). If we set ourselves at standard conditions, we can calculate the standard Gibbs free energy of reaction, \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}\), for any reaction as:\[\begin{equation} \begin{aligned} \Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}&= \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}-T \Delta_{\text{rxn}} S^{-\kern-6pt{\ominus}\kern-6pt-}\\ \\ &= \sum_i \nu_i \Delta_{\mathrm{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}- T \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}, \end{aligned} \end{equation} \nonumber \]where \(\Delta_{\mathrm{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}\) are the standard enthalpies of formation, \(S_i^{-\kern-6pt{\ominus}\kern-6pt-}\) are the standard entropies, and \(\nu_i\) are the stoichiometric coefficients (negative for reactants, positive for products) for every species \(i\) involved in the reaction. All these quantities are commonly tabulated, and we have already discussed their usage in chapters 4 and 7, respectively.\(^1\)The following four options are possible for \(\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}\) of a chemical reaction, depending on the signs of \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}\) and \(\Delta S^{-\kern-6pt{\ominus}\kern-6pt-}\): if \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}<0\) and \(\Delta S^{-\kern-6pt{\ominus}\kern-6pt-}>0\), then \(\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}<0\) at every temperature; if \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}>0\) and \(\Delta S^{-\kern-6pt{\ominus}\kern-6pt-}<0\), then \(\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}>0\) at every temperature; if both are negative, then \(\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}<0\) only at low temperature; and if both are positive, then \(\Delta G^{-\kern-6pt{\ominus}\kern-6pt-}<0\) only at high temperature. Or, in other words: exothermic reactions that increase the entropy are spontaneous at every temperature, endothermic reactions that decrease the entropy are never spontaneous, and in the two mixed cases the outcome is decided by whether the temperature lies below or above the value at which \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}= T \Delta S^{-\kern-6pt{\ominus}\kern-6pt-}\).A simple criterion to evaluate the entropic contribution of a reaction is to look at the total number of moles of the reactants and the products (as the sum of the stoichiometric coefficients).
If the reaction produces more molecules than it consumes \(\left( \left| \sum_\text{products} \nu_i \right| > \left| \sum_\text{reactants} \nu_i \right| \right)\), it will increase the entropy. Vice versa, if the total number of moles decreases during the reaction \(\left( \left| \sum_\text{products} \nu_i \right| < \left| \sum_\text{reactants} \nu_i \right| \right)\), the entropy will also decrease.As we saw in section 8.2, the natural variables of the Gibbs free energy are the temperature, \(T\), the pressure, \(P\), and the chemical composition, expressed as the numbers of moles \(\{n_i\}\). The Gibbs free energy can therefore be expressed using the total differential as (see also the last formula in Equation 8.4.2):\[ dG(T,P,\{n_i\}) = \mkern-18mu \underbrace{\left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}}_{\text{temperature dependence}} \mkern-36mu dT + \underbrace{\left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}}_{\text{pressure dependence}} \mkern-36mu dP + \sum_i \underbrace{\left(\dfrac{\partial G}{\partial n_i} \right)_{T,P,\{n_{j \neq i}\}}}_{\text{composition dependence}} \mkern-36mu dn_i. \label{9.1.5} \]If we know the behavior of \(G\) as we vary each of the three natural variables independently of the other two, we can reconstruct the total differential \(dG\). Each of the partial derivatives in Equation \ref{9.1.5} is one of these coefficients, whose values are given in Equation 8.4.3.This page titled 9.1: Gibbs Equation is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
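To make the bookkeeping in \(\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}= \sum_i \nu_i \Delta_{\mathrm{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}- T \sum_i \nu_i S_i^{-\kern-6pt{\ominus}\kern-6pt-}\) concrete, the sketch below evaluates it for the ammonia synthesis \(\mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3}\). The tabulated values are rounded textbook numbers and should be checked against your own data tables.

```python
# Standard Gibbs free energy of reaction from formation enthalpies and
# standard entropies, for N2(g) + 3 H2(g) -> 2 NH3(g) at 298.15 K.
# Data are rounded 298 K textbook values; verify against your tables.

T = 298.15  # K

# species: (stoichiometric coefficient nu_i, Delta_f H / (J/mol), S / (J/(mol K)))
# nu_i < 0 for reactants, > 0 for products
species = {
    "N2":  (-1,      0.0, 191.6),
    "H2":  (-3,      0.0, 130.7),
    "NH3": (+2, -45900.0, 192.8),
}

dH = sum(nu * H for nu, H, S in species.values())  # Delta_rxn H, J
dS = sum(nu * S for nu, H, S in species.values())  # Delta_rxn S, J/K
dG = dH - T * dS                                   # Delta_rxn G, J

print(f"dH = {dH/1000:.1f} kJ, dS = {dS:.1f} J/K, dG = {dG/1000:.1f} kJ")
print("spontaneous at 298 K:", dG < 0)
```

With these numbers \(\Delta H^{-\kern-6pt{\ominus}\kern-6pt-}\) and \(\Delta S^{-\kern-6pt{\ominus}\kern-6pt-}\) are both negative (fewer gas molecules on the product side), so the reaction is spontaneous only below \(T = \Delta H / \Delta S \approx 463\ \text{K}\).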
9.2: Temperature Dependence of ΔG
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/09%3A_Gibbs_Free_Energy/9.02%3A_Temperature_Dependence_of_G
\[ \left(\dfrac{\partial G}{\partial T} \right)_{P,\{n_i\}}=-S \nonumber \]Let’s analyze the first coefficient, which gives the dependence of the Gibbs energy on temperature. Since this coefficient is equal to \(-S\) and the entropy is always positive, \(G\) must decrease when \(T\) increases at constant \(P\) and \(\{n_i\}\), and vice versa. If we replace \(-S\) with this coefficient in the Gibbs equation, Equation 9.1.3, we obtain:\[ \Delta G = \Delta H + T \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}, \label{9.2.1} \]and since Equation \ref{9.2.1} includes both \(\Delta G\) and its partial derivative with respect to temperature \(\left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}\), we need to rearrange it into an expression whose temperature dependence involves only \(\Delta H\). To do so, we can start by evaluating the partial derivative of \(\left( \dfrac{\Delta G}{T} \right)\) using the product rule:\[ \left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} = \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} - \dfrac{1}{T^2}\Delta G, \label{9.2.2} \]which, replacing \(\Delta G\) from Equation \ref{9.2.1} into Equation \ref{9.2.2}, becomes:\[ \begin{equation} \begin{aligned} \left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} &= \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} - \dfrac{1}{T^2} \left[ \Delta H + T \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}} \right] \\ &= \dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}- \dfrac{\Delta H}{T^2}-\dfrac{1}{T} \left(\dfrac{\partial \Delta G}{\partial T} \right)_{P,\{n_i\}}, \end{aligned} \end{equation} \label{9.2.3} \]which simplifies to:\[ \begin{equation} \begin{aligned} \left[ \dfrac{\partial\left( \dfrac{\Delta G}{T} \right)}{\partial T} \right]_{P,\{n_i\}} &= - \dfrac{\Delta H}{T^2}.
\end{aligned} \end{equation} \label{9.2.4} \]Equation \ref{9.2.4} is known as the Gibbs–Helmholtz equation, and is useful in its integrated form to calculate the Gibbs free energy of a chemical reaction at any temperature \(T\), knowing just the standard Gibbs free energies of formation and the standard enthalpies of formation of the individual species, which are usually reported at \(T=298\;\text{K}\). Assuming that \(\Delta_{\text{rxn}} H\) is approximately independent of temperature over the integration range, the integration is performed as follows:\[ \begin{equation} \begin{aligned} \int_{T_i=298 \;\text{K}}^{T_f=T} \dfrac{\partial\left( \dfrac{\Delta_{\text{rxn}} G}{T} \right)}{\partial T}\,dT &= - \int_{T_i=298 \;\text{K}}^{T_f=T} \dfrac{\Delta_{\text{rxn}} H}{T^2}\,dT \\ \\ \dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}(T)}{T} &= \dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}}{298 \;\text{K}} + \Delta_{\text{rxn}} H^{-\kern-6pt{\ominus}\kern-6pt-}\left( \dfrac{1}{T} -\dfrac{1}{298 \;\text{K}} \right), \end{aligned} \end{equation} \label{9.2.5} \]giving the integrated Gibbs–Helmholtz equation: \[ \dfrac{\Delta_{\text{rxn}} G^{-\kern-6pt{\ominus}\kern-6pt-}(T)}{T} = \dfrac{\sum_i \nu_i \Delta_{\text{f}} G_i^{-\kern-6pt{\ominus}\kern-6pt-}}{298 \;\text{K}} + \sum_i \nu_i \Delta_{\text{f}} H_i^{-\kern-6pt{\ominus}\kern-6pt-}\left( \dfrac{1}{T} -\dfrac{1}{298 \;\text{K}} \right) \label{9.2.6} \]This page titled 9.2: Temperature Dependence of ΔG is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
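The integrated Gibbs–Helmholtz equation is easy to evaluate numerically. The sketch below extrapolates \(\Delta_{\text{rxn}} G\) for the ammonia synthesis \(\mathrm{N_2 + 3\,H_2 \rightarrow 2\,NH_3}\) from 298 K to higher temperatures; the input values are rounded textbook numbers used purely for illustration.

```python
# Integrated Gibbs-Helmholtz equation: extrapolate Delta_rxn G from 298 K
# to another temperature, assuming Delta_rxn H is temperature-independent.
# Rounded textbook values for N2 + 3 H2 -> 2 NH3; illustrative only.

T0 = 298.0       # K, reference temperature
dG0 = -32.7e3    # J, Delta_rxn G at 298 K
dH0 = -91.8e3    # J, Delta_rxn H at 298 K (assumed constant in T)

def dG(T):
    """Delta_rxn G at temperature T: dG(T)/T = dG0/T0 + dH0*(1/T - 1/T0)."""
    return T * (dG0 / T0 + dH0 * (1.0 / T - 1.0 / T0))

for T in (298.0, 400.0, 500.0):
    print(f"T = {T:5.0f} K   dG = {dG(T)/1000:+7.1f} kJ")
```

With these numbers \(\Delta_{\text{rxn}} G\) changes sign between 400 K and 500 K (crossing zero near 463 K, where \(\Delta H = T\Delta S\)): the exothermic, entropy-decreasing reaction stops being spontaneous at high temperature.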
9.3: Pressure Dependence of ΔG
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/The_Live_Textbook_of_Physical_Chemistry_(Peverati)/09%3A_Gibbs_Free_Energy/9.03%3A_Pressure_Dependence_of_G
\[ \left(\dfrac{\partial G}{\partial P} \right)_{T,\{n_i\}}=V \nonumber \]We can now turn our attention to the second coefficient, which gives how the Gibbs free energy changes when the pressure changes. To do this, we keep the system at constant \(T\) and \(\{n_i\}\), and then we consider infinitesimal variations of \(G\). From Equation 8.2.6:\[ dG = VdP -SdT +\sum_i\mu_i dn_i \quad \xrightarrow{\text{constant}\; T,\{n_i\}} \quad dG = VdP, \label{9.3.1} \]which is the differential equation that we were looking for. To study changes of \(G\) for macroscopic changes in \(P\), we can integrate Equation \ref{9.3.1} between the initial and final pressures. Considering an ideal gas, for which \(V=nRT/P\), we obtain:\[\begin{equation} \begin{aligned} \int_i^f dG &= \int_i^f VdP \\ \Delta G &= nRT \int_i^f \dfrac{dP}{P} = nRT \ln \dfrac{P_f}{P_i}. \end{aligned} \end{equation} \label{9.3.2} \]If we take \(P_i = P^{-\kern-6pt{\ominus}\kern-6pt-}= 1 \, \text{bar}\), we can rewrite Equation \ref{9.3.2} as:\[ G = G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln \dfrac{P_f}{P^{-\kern-6pt{\ominus}\kern-6pt-}}, \label{9.3.3} \]which is useful to convert standard Gibbs free energies of formation to pressures different from the standard pressure, using:\[ \Delta_{\text{f}} G = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln \dfrac{P_f}{\underbrace{P^{-\kern-6pt{\ominus}\kern-6pt-}}_{=1 \; \text{bar}}} = \Delta_{\text{f}} G^{-\kern-6pt{\ominus}\kern-6pt-}+ nRT \ln P_f, \label{9.3.4} \]with \(P_f\) expressed in bar. For liquids and solids, \(V\) is essentially independent of \(P\) (liquids and solids are nearly incompressible), and Equation \ref{9.3.1} can be integrated as:\[ \Delta G = \int_i^f VdP = V \int_i^f dP = V \Delta P.\label{9.3.5} \]Plots of \(\Delta_{\text{f}} G\) as a function of pressure, obtained from Equations \ref{9.3.2} and \ref{9.3.5}, show the remarkable difference between the behaviors of a gas and of a liquid.This page titled 9.3: Pressure Dependence of ΔG is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Roberto Peverati via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
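The contrast between Equations \ref{9.3.2} and \ref{9.3.5} can be quantified with a short calculation. The sketch below compresses one mole of an ideal gas and one mole of liquid water from 1 bar to 100 bar at 298.15 K; the molar volume of water (about 18 cm³/mol) is a rounded illustrative value.

```python
# Pressure dependence of G at constant T: ideal gas vs. incompressible liquid.
# Compress 1 mol from 1 bar to 100 bar at 298.15 K. The molar volume of
# liquid water (~18 cm^3/mol) is a rounded illustrative value.
import math

R = 8.314           # J/(mol K), gas constant
T = 298.15          # K
n = 1.0             # mol
Pi, Pf = 1e5, 1e7   # Pa (1 bar -> 100 bar)

# Ideal gas: Delta G = nRT ln(Pf/Pi)
dG_gas = n * R * T * math.log(Pf / Pi)

# Incompressible liquid: Delta G = V * Delta P, with V the constant molar volume
V_liquid = 18e-6                       # m^3/mol, liquid water
dG_liquid = n * V_liquid * (Pf - Pi)

print(f"gas:    dG = {dG_gas/1000:6.2f} kJ")     # about 11.4 kJ
print(f"liquid: dG = {dG_liquid/1000:6.2f} kJ")  # about 0.18 kJ
```

The logarithmic dependence for the gas dwarfs the linear \(V\Delta P\) term for the liquid by roughly a factor of sixty over this pressure range, which is the difference the plots referenced above are meant to convey.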