8.2: Using Thermochemical Cycles to Find Enthalpy Changes
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.02%3A_Using_Thermochemical_Cycles_to_Find_Enthalpy_Changes

Because enthalpy is a state function, the enthalpy change in going between any two states of a system is independent of the path. For a series of changes that restore a system to its original state, the sum of all the enthalpy changes must be zero. This fact enables us to find the enthalpy changes for many processes for which it is difficult to measure heat and work directly. It is easiest to see what is involved by considering a specific example: a cyclic path, A\(\mathrm{\to }\)A*\(\mathrm{\to }\)B\(\mathrm{\to }\)…\(\mathrm{\to }\)A, superimposed on a not-to-scale presentation of the phase diagram for water.

Let us look at the sublimation of ice at the melting point of pure water. The sublimation of ice is the conversion of pure ice to pure water vapor. (The melting point of pure water is the temperature at which pure ice is at equilibrium with pure liquid water at a pressure of one atmosphere; it is represented by points A and A* on the diagram. We want to find the enthalpy of sublimation at the temperature and pressure represented by points D and D*.) Points A, A*, D, and D* are all at the same temperature; this temperature is about 273.153 K or 0.003 C. (This temperature is very slightly greater than 273.15 K or 0 C—which is the temperature at which ice and water are at equilibrium in the presence of air at a total pressure of one atmosphere.) We want to calculate the enthalpy change for the equilibrium conversion of one mole of ice to gaseous water at the pressure where the solid–gas equilibrium line intersects the line \(T=\mathrm{273.153\ K}\approx \mathrm{0\ C}\).

On the diagram, this sublimation pressure is represented as \(P_{sub}\), and the sublimation process is represented as the transition from D* to D. \(P_{sub}\) is less than the triple-point pressure of \(\mathrm{611\ Pa}\) or \(6.03\times {10}^{-3}\ \mathrm{atm}\); however, the difference is less than \(1.4\times {10}^{-5}\ \mathrm{atm}\) or \(\mathrm{1.4\ Pa}\). In equation form, the successive states traversed in this cycle are:

A (ice at 0 C and 1 atm) \(\mathrm{\to }\) A* (water at 0 C and 1 atm) \(\mathrm{\to }\) B (water at 100 C and 1 atm) \(\mathrm{\to }\) B* (water vapor at 100 C and 1 atm) \(\mathrm{\to }\) C (water vapor at 100 C and \(P_{sub}\)) \(\mathrm{\to }\) D (water vapor at 0 C and \(P_{sub}\)) \(\mathrm{\to }\) D* (ice at 0 C and \(P_{sub}\)) \(\mathrm{\to }\) A (ice at 0 C and 1 atm)

We select these steps because it is experimentally straightforward to find the enthalpy change for all of them except the sublimation step (D*\(\mathrm{\to }\)D). All of these steps can be carried out reversibly. This strategy is useful in general: we make extensive use of reversible cycles to find thermodynamic information for chemical systems.
The enthalpy changes for these steps are:

\(H_2O\) (s, 0 C, 1 atm) \(\mathrm{\to }\) \(H_2O\) (liq, 0 C, 1 atm) \[\Delta H\left(\mathrm{A}\mathrm{\to }{\mathrm{A}}^{\mathrm{*}}\right)={\Delta }_{fus}H \nonumber \]

\(H_2O\) (liq, 0 C, 1 atm) \(\mathrm{\to }\) \(H_2O\) (liq, 100 C, 1 atm) \[\Delta H\left({\mathrm{A}}^{\mathrm{*}}\mathrm{\to }\mathrm{B}\right)=\int^{373.15\ \mathrm{K}}_{273.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{liq}\right)\ dT} \nonumber \]

\(H_2O\) (liq, 100 C, 1 atm) \(\mathrm{\to }\) \(H_2O\) (g, 100 C, 1 atm) \[\Delta H\left(\mathrm{B}\mathrm{\to }{\mathrm{B}}^{\mathrm{*}}\right)={\Delta }_{vap}H \nonumber \]

\(H_2O\) (g, 100 C, 1 atm) \(\mathrm{\to }\) \(H_2O\) (g, 100 C, \(P_{sub}\)) \[\Delta H\left({\mathrm{B}}^{\mathrm{*}}\mathrm{\to }\mathrm{C}\right)=\int^{P=P_{sub}}_{P=1}{{\left(\frac{\partial H\left(H_2O,\ \mathrm{g}\right)}{\partial P}\right)}_T\ dP\approx 0} \nonumber \]

\(H_2O\) (g, 100 C, \(P_{sub}\)) \(\mathrm{\to }\) \(H_2O\) (g, 0 C, \(P_{sub}\)) \[\Delta H\left(\mathrm{C}\mathrm{\to }\mathrm{D}\right)=\int^{273.15\ \mathrm{K}}_{373.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{g}\right)\ dT} \nonumber \]

\(H_2O\) (g, 0 C, \(P_{sub}\)) \(\mathrm{\to }\) \(H_2O\) (s, 0 C, \(P_{sub}\)) \[\Delta H\left(\mathrm{D}\mathrm{\to }{\mathrm{D}}^{\mathrm{*}}\right)={-\Delta }_{sub}H \nonumber \]

\(H_2O\) (s, 0 C, \(P_{sub}\)) \(\mathrm{\to }\) \(H_2O\) (s, 0 C, 1 atm) \[\Delta H\left({\mathrm{D}}^{\mathrm{*}}\mathrm{\to }\mathrm{A}\right)=\int^{P=1}_{P=P_{sub}}{{\left(\frac{\partial H\left(H_2O,\ \mathrm{s}\right)}{\partial P}\right)}_T\ dP\approx 0} \nonumber \]

Summing the enthalpy changes around the cycle gives

\[0={\Delta }_{fus}H+\int^{373.15\ \mathrm{K}}_{273.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{liq}\right)\ dT}+{\Delta }_{vap}H+\Delta H\left({\mathrm{B}}^{\mathrm{*}}\mathrm{\to }\mathrm{C}\right)+\int^{273.15\ \mathrm{K}}_{373.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{g}\right)\ dT}-{\Delta }_{sub}H+\Delta H\left({\mathrm{D}}^{\mathrm{*}}\mathrm{\to }\mathrm{A}\right) \nonumber \]

Using results that we find in the next section, \(\Delta H\left({\mathrm{B}}^{\mathrm{*}}\mathrm{\to }\mathrm{C}\right)\approx 0\) and \(\Delta H\left({\mathrm{D}}^{\mathrm{*}}\mathrm{\to }\mathrm{A}\right)\approx 0\), we have

\[0={\Delta }_{fus}H+\int^{373.15\ \mathrm{K}}_{273.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{liq}\right)\ dT}+{\Delta }_{vap}H+\int^{273.15\ \mathrm{K}}_{373.15\ \mathrm{K}}{C_P\left(H_2O,\ \mathrm{g}\right)\ dT}-{\Delta }_{sub}H \nonumber \]

The enthalpy of fusion, the enthalpy of vaporization, and the heat capacities are measurable in straightforward experiments. Their values are given in standard compilations, so we are now able to evaluate \({\Delta }_{sub}H\), a quantity that is not susceptible to direct measurement, from other thermodynamic quantities that are. (See Problem 8.)
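To make the bookkeeping concrete, here is a minimal Python sketch of this summation. It uses the water data quoted in Problem 8 at the end of this chapter and, like the text, neglects the two pressure-change terms; the helper names and the simple trapezoidal integrator are illustrative choices, not part of the original.

```python
# Data from Problem 8 (J/mol and J/(mol K))
dH_fus = 6009.0    # ice -> liquid at 273.15 K
dH_vap = 40657.0   # liquid -> vapor at 373.15 K

def cp_liq(T):
    return 75.49                    # roughly constant for liquid water

def cp_gas(T):
    return 30.51 + 1.03e-2 * T      # water vapor

def integrate(f, a, b, n=1000):
    """Simple trapezoidal integral of f from a to b (b < a is allowed)."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + k * h) for k in range(1, n)) + f(b) / 2)

# Sum around the cycle, dropping the B* -> C and D* -> A terms (~0):
dH_sub = (dH_fus
          + integrate(cp_liq, 273.15, 373.15)   # warm liquid, A* -> B
          + dH_vap
          + integrate(cp_gas, 373.15, 273.15))  # cool vapor, C -> D
print(f"Delta_sub H ~ {dH_sub / 1000:.1f} kJ/mol")   # ~50.8 kJ/mol
```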
8.3: How Enthalpy Depends on Pressure
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.03%3A_How_Enthalpy_Depends_on_Pressure

Let us look briefly at the approximations \(\Delta H\left({\mathrm{B}}^{\mathrm{*}}\mathrm{\to }\mathrm{C}\right)\approx 0\) and \(\Delta H\left({\mathrm{D}}^{\mathrm{*}}\mathrm{\to }\mathrm{A}\right)\approx 0\) that we used in Section 8.2. In these steps, the pressure changes while the temperature remains constant. In Chapter 10, we find a general relationship for the pressure-dependence of a system’s enthalpy: \[{\left(\frac{\partial H}{\partial P}\right)}_T=-T{\left(\frac{\partial V}{\partial T}\right)}_P+V \nonumber \] This evaluates to zero for an ideal gas and to a negligible quantity for many other systems.

For liquids and solids, information on the variation of volume with temperature is collected in tables as the coefficient of thermal expansion, \(\alpha\), where \[\alpha =\frac{1}{V}{\left(\frac{\partial V}{\partial T}\right)}_P \nonumber \] Consequently, the dependence of enthalpy on pressure is given by \[{\left(\frac{\partial H}{\partial P}\right)}_T=V\left(1-\alpha T\right) \nonumber \]

For ice, \(\alpha \approx 50\times {10}^{-6}\ {\mathrm{K}}^{-1}\) and the molar volume near 0 C is \(\mathrm{19.65}\ {\mathrm{cm}}^3\ {\mathrm{mol}}^{-1}\). The enthalpy change for compressing one mole of ice from the sublimation pressure to 1 atm is \(\Delta H\left({\mathrm{D}}^{\mathrm{*}}\mathrm{\to }\mathrm{A}\right)=2\ \mathrm{J}\mathrm{\ }{\mathrm{mol}}^{-1}\).

To find the enthalpy change for expanding one mole of water vapor at 100 C from 1 atm to the sublimation pressure, we use the virial equation and tabulated coefficients for water vapor to calculate \({\left({\partial H}/{\partial P}\right)}_{\mathrm{373\ K}}\). We find \(\Delta H\left({\mathrm{B}}^{\mathrm{*}}\mathrm{\to }\mathrm{C}\right)=220\ \mathrm{J}\ {\mathrm{mol}}^{-1}\). (See Problem 9.)
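A quick numerical check of the ice-compression result, assuming \(V\) and \(\alpha\) are constant over this small pressure range and approximating the sublimation pressure by the triple-point pressure:

```python
# Compress one mole of ice from P_sub to 1 atm using (dH/dP)_T = V(1 - alpha*T).
alpha = 50e-6          # K^-1, thermal expansion coefficient of ice (from text)
V = 19.65e-6           # m^3/mol, molar volume of ice near 0 C
T = 273.15             # K
P_sub = 611.0          # Pa; P_sub approximated by the triple-point pressure
dP = 101325.0 - P_sub  # Pa, compression from P_sub to 1 atm

dH = V * (1 - alpha * T) * dP
print(f"dH(D* -> A) ~ {dH:.1f} J/mol")   # ~2 J/mol, as quoted above
```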
8.4: Standard States and Enthalpies of Formation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.04%3A_Standard_States_and_Enthalpies_of_Formation

A useful convention makes it possible to tabulate enthalpy data for individual compounds in such a way that the enthalpy change for any chemical reaction can be calculated from the tabulated information for the reaction’s reactants and products. The convention comprises the following rules:

I. At any particular temperature, we define the standard state of any liquid or solid substance to be the most stable form of that substance at a pressure of one bar. For example, for water at \(-10\) C, the standard state is ice at a pressure of one bar; at \(+10\) C, it is liquid water at a pressure of one bar.

II. At any particular temperature, we define the standard state of a gas to be the ideal gas standard state at that temperature. By the ideal gas standard state, we mean a finite low pressure at which the real gas behaves as an ideal gas. We know that it is possible to find such a pressure, because any gas behaves as an ideal gas at a sufficiently low pressure. Since the enthalpy of an ideal gas is independent of pressure, we can also think of a substance in its ideal gas standard state as a hypothetical substance whose pressure is one bar but whose molar enthalpy is that of the real gas at an arbitrarily low pressure.

III. For any substance at any particular temperature, we define the standard enthalpy of formation as the enthalpy change for a reaction in which the product is one mole of the substance and the reactants are the compound’s constituent elements in their standard states. For water at \(-10\) C, this reaction is \[H_2\left(\mathrm{g},-10\ \mathrm{C},\ 1\ \mathrm{bar}\right)+\ \frac{1}{2}\ O_2\left(\mathrm{g},-10\ \mathrm{C},\ 1\ \mathrm{bar}\right) \ \mathrm{\to } H_2O\left(\mathrm{s},-10\ \mathrm{C},\ 1\ \mathrm{bar}\right) \nonumber \] For water at \(+10\) C, it is \[H_2\left(\mathrm{g},+10\ \mathrm{C},\ 1\ \mathrm{bar}\right)+\ \frac{1}{2}\ O_2\left(\mathrm{g},+10\ \mathrm{C},\ 1\ \mathrm{bar}\right) \ \mathrm{\to } H_2O\left(\mathrm{liq},+10\ \mathrm{C},\ 1\ \mathrm{bar}\right) \nonumber \] For water at \(+110\) C, it is \[H_2\left(\mathrm{g},+110\ \mathrm{C},\ 1\ \mathrm{bar}\right)+\ \frac{1}{2}\ O_2\left(\mathrm{g},+110\ \mathrm{C},\ 1\ \mathrm{bar}\right) \ \mathrm{\to } H_2O\left(\mathrm{g},+110\ \mathrm{C},\ 1\ \mathrm{bar}\right) \nonumber \]

IV. The standard enthalpy of formation is given the symbol \(\boldsymbol{\Delta }_{\boldsymbol{f}} \boldsymbol{H}^{\boldsymbol{o}}\), where the superscript degree sign indicates that the reactants and products are all in their standard states. The subscript, \(\boldsymbol{f}\), indicates that the enthalpy change is for the formation of the indicated compound from its elements. Frequently, the compound and other conditions are specified in parentheses following the symbol. The solid, liquid, and gas states are usually indicated by the letters “s”, “\(\ell\)” (or “liq”), and “g”, respectively. The letter “c” is sometimes used to indicate that the substance is in a crystalline state. In this context, specification of the gas state normally means the ideal gas standard state.

Thermochemical-data tables that include standard enthalpies of formation can be found in a number of publications or on the internet. For some substances, values are available at a number of temperatures.
For substances for which fewer data are available, these tables usually give the value of the standard enthalpy of formation at 298.15 K. (In this context, 298.15 K is frequently abbreviated to 298 K.)

V. For any element at any particular temperature, we define the standard enthalpy of formation to be zero. When we define standard enthalpies of formation, we choose the elements in their standard states as a common reference state for the enthalpies of all substances at a given temperature. While we could choose any arbitrary value for the enthalpy of an element in its standard state, choosing it to be zero is particularly convenient.
8.5: The Ideal Gas Standard State
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.05%3A_The_Ideal_Gas_Standard_State

The ideal gas standard state is a useful invention, which has additional advantages that emerge as our development proceeds. For permanent gases—gases whose behavior is approximately ideal anyway—there is a negligible difference between the enthalpy in the ideal gas state and the enthalpy at 1 bar.

For volatile substances that are normally liquid or solid at 1 bar, the ideal gas standard state becomes a second standard state. For such substances, data tables frequently give the standard enthalpy of formation for both the condensed phase (designated \({\Delta }_fH^o\left(\mathrm{liq}\right)\) or \({\Delta }_fH^o\left(\mathrm{s}\right)\)) and the ideal gas standard state (designated \({\Delta }_fH^o\left(\mathrm{g}\right)\)). For example, the CODATA\({}^{1}\) values for the standard enthalpies of formation for liquid and ideal-gas methanol are \(-\mathrm{239.2}\) and \(-201.0\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1}\), respectively, at 298.15 K. The difference between these values is the enthalpy change in vaporizing one mole of liquid methanol to its ideal gas standard state at 298.15 K: \[CH_3OH\left(\mathrm{liq},\ 298.15\mathrm{\ K},\ 1\ \mathrm{bar}\right) \mathrm{\to } CH_3OH\left(\mathrm{ideal\ gas},\ 298.15\mathrm{\ K},\ \sim 0\ \mathrm{bar}\right) \nonumber \]

Since this is the difference between the enthalpy of methanol in its standard state as an ideal gas and methanol in its standard state as a liquid, we can call this difference the standard enthalpy of vaporization for methanol: \[{\Delta }_{vap}H^o={\Delta }_fH^o\left(\mathrm{g,\ 298.15\ K,\ }\sim 0\mathrm{\ bar}\right)-{\Delta }_fH^o\left(\mathrm{liq,\ 298.15\ K,\ }1\mathrm{\ bar}\right)=37.40\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1} \nonumber \] This is not a reversible process, because liquid methanol at 1 bar is not at equilibrium with its vapor at an arbitrarily low pressure at 298.15 K.

Note that \({\Delta }_{vap}H^o\) is not the same as the ordinary enthalpy of vaporization, \({\Delta }_{vap}H\). The ordinary enthalpy of vaporization is the enthalpy change for the reversible vaporization of liquid methanol to real methanol vapor at a pressure of 1 atm and the normal boiling temperature. We write it without the superscript degree sign because methanol vapor is not produced in its standard state. For methanol, the normal boiling point and enthalpy of vaporization\({}^{2}\) are \(337.8\ \mathrm{K}\) and \(35.21\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1}\), respectively.

We can devise a cycle that relates these two vaporization processes to one another: (1) warm the liquid from 298.15 K to the normal boiling point, 337.8 K, at 1 bar; (2) change the pressure of the liquid from 1 bar to 1 atm; (3) vaporize the liquid reversibly at 337.8 K and 1 atm; (4) expand the vapor from 1 atm to its ideal gas standard state; (5) cool the vapor from 337.8 K back to 298.15 K. Summing these steps yields the process for vaporizing liquid methanol in its standard state to methanol vapor in its standard state. Thus, we have \[{\Delta }_{vap}H^o={\Delta }_{\left(1\right)}H+{\Delta }_{\left(2\right)}H+{\Delta }_{vap}H+{\Delta }_{\left(4\right)}H+{\Delta }_{\left(5\right)}H \nonumber \]

\({\Delta }_{\left(1\right)}H\) and \({\Delta }_{\left(5\right)}H\) can be evaluated by integrating the heat capacities for the liquid and gas, respectively. \({\Delta }_{\left(2\right)}H\) and \({\Delta }_{\left(4\right)}H\) can be evaluated by integrating \({\left({\partial H}/{\partial P}\right)}_T\) for the liquid and gas, respectively. \({\Delta }_{\left(2\right)}H\) is negligible.
(For the evaluation of these quantities, see problem 10.)
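A rough numerical sketch of this cycle, assuming the constant heat capacities given in Problem 10 and neglecting the two pressure-dependence terms \({\Delta }_{\left(2\right)}H\) and \({\Delta }_{\left(4\right)}H\):

```python
# Methanol cycle: standard-state vaporization enthalpy at 298.15 K from the
# ordinary vaporization enthalpy at the normal boiling point.
T1, T2 = 298.15, 337.8        # K: standard temperature and normal boiling point
cp_liq, cp_gas = 81.1, 44.1   # J/(mol K), from Problem 10
dH_vap_Tb = 35210.0           # J/mol at 337.8 K and 1 atm

dH1 = cp_liq * (T2 - T1)      # step (1): warm the liquid
dH5 = cp_gas * (T1 - T2)      # step (5): cool the vapor
dH_vap_std = dH1 + dH_vap_Tb + dH5
print(f"Delta_vap H_std ~ {dH_vap_std / 1000:.1f} kJ/mol")   # ~36.7 kJ/mol
```

The result is a little below the tabulated difference; the neglected pressure-dependence term \({\Delta }_{\left(4\right)}H\) for the vapor, evaluated from the virial coefficient in Problem 10, accounts for much of the remaining difference.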
8.6: Standard Enthalpies of Reaction
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.06%3A_Standard_Enthalpies_of_Reaction

The benefit of these conventions is that, at any particular temperature, the standard enthalpy change for a reaction \[aA+bB+\dots \ \to \ cC+dD+\dots\nonumber \] which we designate as \(\Delta_rH^o\), is given by \[\Delta_rH^o=\underbrace{c{\Delta}_fH^o\left(C\right)+d{\Delta}_fH^o\left(D\right)+\dots}_{\text{product enthalpies}} - \underbrace{\left(a{\Delta}_fH^o\left(A\right)+b\Delta_fH^o\left(B\right)+\dots \right)}_{\text{reactant enthalpies}}\nonumber \]

If we have the enthalpies of formation, we can compute the enthalpy change for the reaction. We can demonstrate this by writing out the chemical equations corresponding to the formation of A, B, C, and D from their elements. When we multiply these chemical equations by the appropriately signed stoichiometric coefficient and add them, we obtain the chemical equation for the indicated reaction of A and B to give C and D. (See below.) Because enthalpy is a state function, the enthalpy change that we calculate this way will be valid for any process that converts the specified reactants into the specified products.

The oxidation of methane to methanol is a reaction that illustrates the value of this approach. The normal products in the oxidation of methane are, of course, carbon dioxide and water. If the reaction is done with an excess of methane, a portion of the carbon-containing product will be carbon monoxide rather than carbon dioxide. In any circumstance, methanol is, at best, a trace product. Nevertheless, it would be very desirable to devise a catalyst that quantitatively—or nearly quantitatively—converted methane to methanol according to the equation \[\ce{CH4 + 1/2O_2 -> CH3OH}\nonumber \] (This is frequently called a selective oxidation, to distinguish it from the non-selective oxidation that produces carbon dioxide and water.)

If the catalyst were not inordinately expensive or short-lived, and the operating pressure were sufficiently low, this would be an economical method for the manufacture of methanol. (Methanol is currently manufactured from methane. However, the process involves two steps and requires a substantial capital investment.) If the cost of manufacturing methanol could be decreased sufficiently, it would become economically feasible to convert natural gas—which cannot be transported economically unless it is feasible to build a pipeline for the purpose—into liquid methanol, which is readily transported by ship. (At present, the economic feasibility of marine transport of liquefied natural gas, LNG, is marginal, but it appears to be improving.) This technology would make it possible to utilize the fuel value of known natural gas resources that are presently useless because they are located too far from population centers.

When we contemplate trying to develop a catalyst and a manufacturing plant to carry out this reaction, we soon discover reasons for wanting to know the enthalpy change. One is that the oxidative manufacture of methanol will be exothermic, so burning the methanol produced will yield less heat than would be produced by burning the methane from which it was produced. We want to know how much heat energy is lost in this way.

Another reason is that a manufacturing plant will have to control the temperature of the oxidation reaction in order to maintain optimal performance.
(If the temperature is too low, the reaction rate will be too slow. If the temperature is too high, the catalyst may be deactivated in a short time, and the production of carbon oxides will probably be excessive.) A chemical engineer designing a plant will need to know how much heat is produced so that adequate cooling equipment can be provided.

Because we do not know how to carry out this reaction, we cannot measure its enthalpy change directly. However, if we have the enthalpies of formation for methane and methanol, we can compute this enthalpy change:

\[\ce{C(s) + 2H2(g) + 1/2O2(g) -> CH3OH(g)}\nonumber \] \[\Delta H=\Delta_fH^o\left(CH_3OH,\ g\right)\nonumber \]

\[CH_4\left(g\right)\to C\left(\mathrm{s}\right)+2\ H_2\left(g\right)\nonumber \] \[\Delta H={-\Delta }_fH^o\left(CH_4,g\right)\nonumber \]

\[1/2\ O_2\left(g\right)\to 1/2\ O_2\left(g\right)\nonumber \] \[\Delta H=-1/2\ \Delta_fH^o\left(O_2,g\right)=0\nonumber \]

Summing the reactions gives \[\ce{CH4(g) + 1/2O2(g) -> CH3OH(g)}\nonumber \] \[\Delta H=\Delta_rH^o\nonumber \] and summing the enthalpy changes gives \[\Delta_rH^o=\Delta_fH^o (CH_3OH, g) -\Delta_f H^o (CH_4, g)- 1/2\ \Delta_fH^o\left(O_2,g\right)\nonumber \]

A thermochemical cycle diagram makes it clear how these conventions, and the fact that enthalpy is a state function, work together to produce, for the reaction \(aA+bB+\dots \to cC+dD+\dots\), the result that the standard reaction enthalpy is given by \[\Delta_rH^o={c\ \Delta }_fH^o\left(C\right)+{d \Delta }_fH^o\left(D\right)+\dots -{a \Delta }_fH^o\left(A\right)-{b\ \Delta }_fH^o\left(B\right)-\dots\nonumber \]

This cycle highlights another aspect of the conventions that we have developed. Note that \(\Delta_rH^o\) is the difference between the enthalpies of formation of the separated products and the enthalpies of formation of the separated reactants. We often talk about \(\Delta_rH^o\) as if it were the enthalpy change that would occur if we mixed \(a\) moles of \(A\) with \(b\) moles of \(B\) and the reaction proceeded quantitatively to yield a mixture containing \(c\) moles of \(C\) and \(d\) moles of \(D\). This is usually a good approximation. However, to relate rigorously the standard enthalpy of reaction to the enthalpy change that would occur in a real system in which this reaction took place, it is necessary to recognize that there can be enthalpy changes associated with the pressure–volume changes and with the processes of mixing the reactants and separating the products.

Let us suppose that the reactants and products are gases in their hypothetical ideal-gas states at 1 bar, and that we carry out the reaction by mixing the reactants in a sealed pressure vessel. We suppose that the reaction is then initiated and that the products are formed rapidly, reaching some new pressure and an elevated temperature. (To be specific, we could imagine the reaction to be the combustion of methane. We would mix known amounts of methane and oxygen in a pressure vessel and initiate the reaction using an electrical spark.) We allow the temperature to return to the original temperature of the reactants; there is an accompanying pressure change.

Experimentally, we measure the heat evolved as the mixed reactants are converted to the mixed products, at the original temperature. To complete the process corresponding to the standard enthalpy change, however, we must also separate the products and bring them to a pressure of 1 bar.
That is, the standard enthalpy of reaction and the enthalpy change we would measure are related by the following sequence of changes, where the middle equation corresponds to the process whose enthalpy change we actually measure.

\[{\left(aA+bB\right)}_{\mathrm{separate\ reactants\ at\ }P = 1 \text{ bar}}\to {\left(aA+bB\right)}_{\mathrm{separate\ reactants\ at\ }P}\nonumber \] \[\Delta H_{\mathrm{compression}}\nonumber \]

\[{\left(aA+bB\right)}_{\mathrm{separate\ reactants\ at\ }P}\to {\left(aA+bB\right)}_{\mathrm{homogeneous\ mixture\ at\ }P}\nonumber \] \[\Delta H_{\mathrm{mixing}}\nonumber \]

\[{\left(aA+bB\right)}_{\mathrm{homogeneous\ mixture\ at\ }P}\to {\left(cC+dD\right)}_{\mathrm{homogeneous\ mixture\ at\ }P^*}\nonumber \] \[\Delta H_{\mathrm{measured}}\nonumber \]

\[{\left(cC+dD\right)}_{\mathrm{homogeneous\ mixture\ at\ }P^*}\to {\left(cC+dD\right)}_{\mathrm{separate\ products\ at\ }P^*}\nonumber \] \[\Delta H_{\mathrm{separation}}\nonumber \]

\[{\left(cC+dD\right)}_{\mathrm{separate\ products\ at\ }P^*}\to {\left(cC+dD\right)}_{\mathrm{separate\ products\ at\ }P=1\mathrm{\ bar\ }}\nonumber \] \[\Delta H_{\mathrm{expansion}}\nonumber \]

Summing the reaction equations gives \[{\left(aA+bB\right)}_{\mathrm{separate\ reactants\ at\ }P=1\ \mathrm{bar}}\to {\left(cC+dD\right)}_{\mathrm{separate\ products\ at\ }P=1\mathrm{\ bar\ }}\nonumber \] \[\Delta_rH^o\nonumber \] and summing the enthalpy changes for the series of steps gives the standard enthalpy change for the reaction: \[\Delta_rH^o=\Delta H_{\mathrm{compression}}+\Delta H_{\mathrm{mixing}}+\Delta H_{\mathrm{measured}} + \Delta H_{\mathrm{separation}}+\Delta H_{\mathrm{expansion}}\nonumber \]

It turns out that the enthalpy changes for the compression, mixing, separation, and expansion processes are usually small compared to \(\Delta_rH^o\). This is the principal justification for our frequent failure to consider them explicitly. For ideal gases, these enthalpy changes are identically zero. (In Chapter 13, we see that the entropy changes for the mixing and separation processes are important.)

When we call \(\Delta_rH^o\) the standard enthalpy change “for the reaction,” we are indulging in a degree of poetic license. Since \(\Delta_rH^o\) is a computed difference between the enthalpies of the pure products and those of the pure reactants, the corresponding “reaction” is a purely formal change, which is a distinctly different thing from the real-world process that actually occurs.
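The formation-enthalpy bookkeeping is easy to express in code. The sketch below uses formation enthalpies back-calculated from the numbers quoted in Section 8.8; the dictionary layout and names are illustrative choices.

```python
# Standard enthalpies of formation at 300 K (kJ/mol). O2 is an element,
# so its standard enthalpy of formation is zero by convention (rule V).
dHf = {"CH3OH(g)": -201.068, "CH4(g)": -74.656, "O2(g)": 0.0}

# CH4(g) + 1/2 O2(g) -> CH3OH(g): products get positive coefficients,
# reactants negative, so the sum is "products minus reactants".
stoich = {"CH3OH(g)": +1.0, "CH4(g)": -1.0, "O2(g)": -0.5}

dHr = sum(nu * dHf[species] for species, nu in stoich.items())
print(f"Delta_r H ~ {dHr:.3f} kJ/mol")   # ~ -126.412 kJ/mol
```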
8.7: Standard State Heat Capacities
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.07%3A_Standard_State_Heat_Capacities

We have observed that \(C_V\) depends on volume and temperature, while \(C_P\) depends on pressure and temperature. Compilations of heat capacity data usually give values for \(C_P\), rather than \(C_V\). When the temperature-dependence of \(C_P\) is known, such compilations usually express it as an empirical polynomial function of temperature. In Chapter 10, we find an explicit function for the dependence of \(C_P\) on pressure: \[{\left(\frac{\partial C_P}{\partial P}\right)}_T=-T{\left(\frac{{\partial }^2V}{\partial T^2}\right)}_P \nonumber \] If we have an equation of state for a substance, we can find this pressure dependence immediately. It is usually negligible. For ideal gases, it is zero, and \(C_P\) is independent of pressure.

Compilations often give data for the standard state heat capacity, \(C^o_P\), at a specified temperature. For condensed phases, this is the heat capacity for the substance at one bar. For gases, this is the heat capacity of the substance in its ideal gas standard state.
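The ideal-gas case can be checked symbolically; a minimal sketch (the use of sympy is an illustrative tool choice, not part of the text):

```python
import sympy as sp

T, P, n, R = sp.symbols("T P n R", positive=True)
V = n * R * T / P                    # ideal-gas equation of state

# (dCp/dP)_T = -T (d^2 V / dT^2)_P; the second derivative of V with
# respect to T vanishes for an ideal gas, so Cp is pressure-independent.
dCp_dP = -T * sp.diff(V, T, 2)
print(sp.simplify(dCp_dP))           # prints 0
```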
8.8: How The Enthalpy Change for a Reaction Depends on Temperature
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.08%3A_How_The_Enthalpy_Change_for_a_Reaction_Depends_on_Temperature

In Section 8.6, we see how to use tabulated enthalpies of formation to calculate the enthalpy change for a particular chemical reaction. Such tables typically give enthalpies of formation at a number of different temperatures, so that the enthalpy change for a given reaction can also be calculated at these different temperatures; it is just a matter of repeating the same calculation at each temperature.

We often need to find the enthalpy change associated with increasing the temperature of a substance at constant pressure. As we observe in §1, this enthalpy change is readily calculated by integrating the heat capacity over the temperature change. We may want to know, for example, the enthalpy change for increasing the temperature of one mole of methane from 300 K to 400 K, with the pressure held constant at one bar. In Table 1, we find \[ \Delta_fH^o\left(CH_4 ,g,300\, K\right) =-74.656\ \mathrm{k}\mathrm{J}\ \mathrm{mol}^{-1} \nonumber \] \[ \Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right) = -77.703\ \mathrm{k}\mathrm{J}\ \mathrm{mol}^{-1} \nonumber \]

We might be tempted to think that the difference represents the enthalpy change associated with heating the methane. This is not so! The reason becomes immediately apparent if we consider a cycle in which we go from the elements to a compound at two different temperatures. For methane, this cycle gives \[\Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right)-\Delta_fH^o\left(CH_4\mathrm{,g,300\ K}\right) =\int^{400}_{300}{C_P\left(CH_4\mathrm{,g}\right)dT}-\int^{400}_{300}{C_P\left(C\mathrm{,s}\right)dT} -2\int^{400}_{300}{C_P\left(H_2\mathrm{,g}\right)dT} \nonumber \]

Over the temperature range from 300 K to 400 K, the heat capacities of carbon, hydrogen, and methane are approximated by \(C_P=a+bT\), with values of \(a\) and \(b\) given in Table 1. From this information, we calculate the enthalpy change for increasing the temperature of one mole of each substance from 300 K to 400 K at 1 bar: \(\Delta H\left(C\right)=1,029\ \mathrm{J}\ {\mathrm{mol}}^{-1}\), \(\Delta H\left(H_2\right)=2,902\ \mathrm{J}\ {\mathrm{mol}}^{-1}\), and \(\Delta H\left(CH_4\right)=3,819\ \mathrm{J}\ {\mathrm{mol}}^{-1}\). Thus, from the cycle, we calculate: \[\Delta_fH^o\left(CH_4\mathrm{,g,400\ K}\right)=-74,656+3,819-1,029-2\left(2,902\right)\ \mathrm{J}\ {\mathrm{mol}}^{-1}=\ -77,670\ \mathrm{J}\ {\mathrm{mol}}^{-1} \nonumber \] The tabulated value is \(-77,703\ \mathrm{J}\ {\mathrm{mol}}^{-1}\). The two values differ by \(33\ \mathrm{J}\ {\mathrm{mol}}^{-1}\), or about 0.04%. This difference arises from the limitations of the two-parameter heat-capacity equations.

As another example of a thermochemical cycle, let us consider the selective oxidation of methane to methanol at 300 K and 400 K. From the enthalpies of formation in Table 1, we calculate the enthalpies for the reaction to be \(\Delta_rH^o\left(\mathrm{300\ K}\right)=-126.412\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1}\) and \(\Delta_rH^o\left(\mathrm{400\ K}\right)=-126.919\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1}\). As in the previous example, we use the tabulated heat-capacity parameters to calculate the enthalpy change for increasing the temperature of one mole of each of these gases from 300 K to 400 K at 1 bar.
We find: \(\Delta H\left(CH_3OH\right)=4,797\ \mathrm{J}\ {\mathrm{mol}}^{-1}\), \(\Delta H\left(CH_4\right)=3,819\ \mathrm{J}\ {\mathrm{mol}}^{-1}\), and \(\Delta H\left(O_2\right)=2,975\ \mathrm{J}\ {\mathrm{mol}}^{-1}\).

Inspecting the resulting cycle, we see that we can calculate the enthalpy change for warming one mole of methanol from 300 K to 400 K by summing the enthalpy changes around the bottom, left side, and top of the cycle; that is, \[\Delta H\left(CH_3OH\right)=126,412+3,819+\left(\frac{1}{2}\right)2,975-126,919\ \mathrm{J}\ {\mathrm{mol}}^{-1}=4,800\ \mathrm{J}\ {\mathrm{mol}}^{-1} \nonumber \] This is 3 J, or about 0.06%, larger than the value obtained \(\left(4,797\ \mathrm{J}\right)\) by integrating the heat capacity for methanol.
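A compact check of the methane cycle arithmetic, using the warming enthalpies quoted above:

```python
# Warming enthalpies (J/mol) for each substance, 300 K -> 400 K at 1 bar,
# computed in the text from Cp = a + b*T fits:
dH_CH4, dH_C, dH_H2 = 3819.0, 1029.0, 2902.0
dHf_CH4_300 = -74656.0   # J/mol, from Table 1

# Around the cycle: elements(300) -> CH4(300) -> CH4(400) must equal
# elements(300) -> elements(400) -> CH4(400), so
dHf_CH4_400 = dHf_CH4_300 + dH_CH4 - dH_C - 2 * dH_H2
print(f"{dHf_CH4_400:.0f} J/mol")   # -77670, vs. the tabulated -77703
```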
8.9: Calorimetry
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.09%3A_Calorimetry

Calorimetry is the experimental science of measuring the heat changes that accompany chemical or physical changes. The accurate measurement of small amounts of heat is experimentally challenging. Nevertheless, calorimetry is an area in which great experimental sophistication has been achieved and remarkably accurate measurements can be made. Numerous devices have been developed to measure heat changes.

Some of these devices measure a (usually small) temperature change. Such devices are calibrated by measuring how much their temperature increases when a known amount of heat is introduced. This is usually accomplished by passing a known electric current through a known resistance for a known time. Other calorimeters measure the amount of some substance that undergoes a phase change. The ice calorimeter is an important example of the latter method. In an ice calorimeter, the heat of the process is transferred to a mixture of ice and water. The amount of ice that melts is a direct measure of the amount of heat released by the process. The amount of ice melted can be determined either by direct measurement of the increase in the amount of water present or by measuring the change in the volume of the ice–water mixture. (Since ice occupies a greater volume than the same mass of water, melting is accompanied by a decrease in the total volume occupied by the mixture of ice and water.)

The processes that can be investigated accurately using calorimetry are limited by two important considerations. One is that the process must go to completion within a relatively short time. No matter how carefully it is constructed, any calorimeter will exchange thermal energy with its environment at some rate. If this rate is not negligibly small compared to the rate at which the process evolves heat, the accuracy of the measurement is degraded. The second limitation is that the process must involve complete conversion of the system from a known initial state to a known final state. When the processes of interest are chemical reactions, these considerations mean that the reactions must be quantitative and fast.

Combustion reactions and catalytic hydrogenation reactions usually satisfy these requirements, and they are the most commonly investigated. However, even in these cases, there can be complications. For a compound containing only carbon, hydrogen, and oxygen, combustion using excess oxygen produces only carbon dioxide and water. For compounds containing heteroatoms like nitrogen, sulfur, or phosphorus, there may be more than one heteroatom-containing product. For example, combustion of an organosulfur compound might produce both sulfur dioxide and sulfur trioxide. To utilize the thermochemical data obtained in such experiments, a chemical analysis must be done to determine the amount of each oxide present.
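As a small illustration of the ice-calorimeter bookkeeping: the heat absorbed is the number of moles of ice melted times the enthalpy of fusion (6.009 kJ mol⁻¹, quoted in Problem 8 below). The molar-mass constant is supplied here for the example, not taken from the text.

```python
# Ice calorimeter: the heat of the process melts ice at 0 C, so the
# mass of ice melted is a direct measure of the heat released.
dH_fus = 6009.0    # J/mol, enthalpy of fusion of ice (Problem 8)
M_water = 18.015   # g/mol, molar mass of water (added constant)

def heat_from_ice_melted(mass_melted_g):
    """Heat absorbed by the ice-water mixture, in joules."""
    return (mass_melted_g / M_water) * dH_fus

print(f"{heat_from_ice_melted(1.00):.0f} J per gram of ice melted")  # ~334 J
```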
8.10: Problems
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/08%3A_Enthalpy_and_Thermochemical_Cycles/8.10%3A_Problems

1. One mole of an ideal gas reversibly traverses Cycle I above. Step a is isothermal. Step b is isochoric (constant volume). Step c is isobaric (constant pressure). Assume \(C_V\) and \(C_P\) are constant. Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for each step and for the cycle. Prove \(C_P=C_V+R\).

2. One mole of an ideal gas reversibly traverses Cycle II below. Step a is the same isothermal process as in problem 1. Step d is adiabatic. Step e is isobaric. Assume \(C_V\) and \(C_P\) are constant. Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for each step and for the cycle.

3. One mole of an ideal gas reversibly traverses Cycle III below. Step a is the same isothermal process as in problem 1. Step f is adiabatic. Step g is isochoric. Assume \(C_V\) and \(C_P\) are constant. Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for each step and for the cycle.

4. One mole of an ideal gas reversibly traverses Cycle IV. Step h is isobaric. Step f is the same adiabatic process as in problem 3. Step i is isochoric. Assume \(C_V\) and \(C_P\) are constant. Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for each step and for the cycle.

5. Prove that the work done on the system is positive when the system traverses Cycle I. Note that Cycle I traverses the region of the \(PV\) plane that it encloses in a counter-clockwise direction. Hint: Note that \(T_2\) …

… Cycle III \(\mathrm{>}\) Cycle II.

8. For water, the enthalpies of fusion and vaporization are \(6.009\) and \(40.657\ \mathrm{k}\mathrm{J}\ {\mathrm{mol}}^{-1}\), respectively. The heat capacity of liquid water varies only weakly with temperature and can be taken as \(\mathrm{75.49\ }\mathrm{J}\ {\mathrm{mol}}^{-1}\ {\mathrm{K}}^{-1}\). The heat capacity of water vapor varies with temperature: \[C_P\left(H_2O\mathrm{,\ g}\right)=30.51+\left(1.03\times {10}^{-2}\right)T \nonumber \] where \(T\) is in degrees K and the heat capacity is in \(\mathrm{J}\ {\mathrm{mol}}^{-1}\ {\mathrm{K}}^{-1}\). Estimate the enthalpy of sublimation of water.

9. If we truncate the virial equation \(\left(Z=1+B^*\left(T\right)P+\dots \right)\) and make use of \(B\left(T\right)=RTB^*\left(T\right)\), where \(B\left(T\right)\) is the “second virial coefficient” most often given in data tables, the molar volume is \[\overline{V}=\frac{RT}{P}+B\left(T\right) \nonumber \] Show that \[{\left(\frac{\partial H}{\partial P}\right)}_T=B\left(T\right)-T\left(\frac{dB}{dT}\right) \nonumber \] The Handbook of Chemistry and Physics (CRC Press, 79\({}^{th}\) Ed., 1999, p. 6–25) gives the temperature dependence of \(B\) for water vapor as \[B=-1158-5157t-10301t^2-10597t^3-4415t^4 \nonumber \] where \(t=\left({298.15}/{T}\right)-1\), \(T\) is in degrees kelvin, and the units of \(B\) are \({\mathrm{cm}}^{3}\ {\mathrm{mol}}^{-1}\). Estimate the enthalpy change when one mole of water vapor at 1 atm and 100 C is expanded to the equilibrium sublimation pressure, which for this purpose we can approximate as the triple-point pressure, \(610\ \mathrm{Pa}\). How does this value compare to the result of problem 8?

10. The heat capacities of methanol liquid and gas are \(81.1\) and \(44.1\ \mathrm{J}\ {\mathrm{mol}}^{-1}\ {\mathrm{K}}^{-1}\), respectively.
The second virial coefficient for methanol vapor is \[B=-1752-4694t \nonumber \] where \(t=\left({298.15}/{T}\right)-1\), \(T\) is in degrees kelvin, and the units of \(B\) are \({\mathrm{cm}}^{3}\ {\mathrm{mol}}^{-1}\). Referring to the discussion of methanol vaporization in §5, calculate \({\Delta }_{\left(1\right)}H\), \({\Delta }_{\left(4\right)}H\), \({\Delta }_{\left(5\right)}H\), and \({\Delta }_{vap}H^o\). Compare this value of \({\Delta }_{vap}H^o\) to the value given in the text. [Data from the Handbook of Chemistry and Physics, CRC Press, 79\({}^{th}\) Ed., 1999, p. 5-27 and p. 6-31.]

11. Using data from the table above, find the enthalpy change for each of the following reactions at 298 K.

(a) \(C_2H_6\left(\mathrm{g}\right)+\frac{1}{2}\ O_2\left(\mathrm{g}\right)\to CH_3CH_2OH\left(\mathrm{liq}\right)\)

(b) \(C_2H_4\left(\mathrm{g}\right)+\frac{1}{2}\ O_2\left(\mathrm{g}\right)\to CH_3CHO\left(\mathrm{liq}\right)\)

(c) \(C_2H_6\left(\mathrm{g}\right)+O_2\left(\mathrm{g}\right)\to CH_3CHO\left(\mathrm{liq}\right)+H_2O\left(\mathrm{liq}\right)\)

(d) \(C_6H_6\left(\mathrm{liq}\right)+\ CO_2\left(\mathrm{g}\right)\to C_6H_5CO_2H\left(\mathrm{s}\right)\)

(e) \(CH_3CHO\left(\mathrm{liq}\right)+\frac{1}{2}\ O_2\left(\mathrm{g}\right)\to CH_3CO_2H\left(\mathrm{liq}\right)\)

(f) \(CH_4\left(\mathrm{g}\right)+H_2O\left(\mathrm{liq}\right)\to CO\left(\mathrm{g}\right)+3\ H_2\left(\mathrm{g}\right)\)

(g) \(CH_4\left(\mathrm{g}\right)+H_2O\left(\mathrm{liq}\right)+\frac{1}{2}\ O_2\left(\mathrm{g}\right)\to CO_2\left(\mathrm{g}\right)+3\ H_2\left(\mathrm{g}\right)\)

(h) \(C_2H_4\left(\mathrm{g}\right)+CO\left(\mathrm{g}\right)+\ H_2\left(\mathrm{g}\right)\to CH_3CH_2CHO\left(\mathrm{liq}\right)\)

Notes

\({}^{1}\) Data compiled by The Committee on Data for Science and Technology (CODATA) and reprinted in D. R. Lide, Editor, The Handbook of Chemistry and Physics, 79\({}^{th}\) Edition, CRC Press, Section 5.

\({}^{2}\) D. R. Lide, op. cit., p. 6-104.
9.1: The Second Law of Thermodynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.01%3A_The_Second_Law_of_Thermodynamics

The first law of thermodynamics is concerned with energy and its properties. As we saw in Chapter 7, the first law arose from the observation that the dissipation of mechanical work through friction creates heat. In a synthesis that was partly definition and partly a generalization from experience, it was proposed that mechanical energy and heat are manifestations of a common quantity, energy. Later, by further definition and generalization, the concept was expanded to include other forms of energy. The energy concept evolved into the prescript that there exists a quantity (state function) that is conserved through any manner of change whatsoever.

The element of definition arises from the fact that we recognize new forms of energy whenever necessary in order to ensure that the conservation condition is satisfied. The element of experience arises from the fact that this prescript has resulted in a body of theory and a body of experimental results that are mutually compatible. When we define and measure energy “correctly” we do indeed find that energy is a state function and that it is conserved.

The theory of relativity introduced a significant expansion of the energy concept. For chemical processes, we can view mass and energy conservation as independent postulates. For processes in which fundamental particles undergo changes and for systems moving at velocities near that of light, we cannot. Relativity asserts that the energy of a particle is given by Einstein’s equation, \[E^2=p^2c^2+m^2_0c^4. \nonumber \] In this equation, \(E\) is the particle energy, \(p\) is its momentum, \(m_0\) is its rest mass, and \(c\) is the speed of light. In transformations of fundamental particles in which the sum of the rest masses of the product particles is less than that of the reactant particles, conservation of energy requires that the sum of the momenta of the product particles exceed that of the reactant particles. The momentum increase means that the product particles have high velocities, corresponding to a high temperature for the product system. The most famous expression of this result is that \(E=m_0c^2\), meaning that we can associate this quantity of energy with the mass, \(m_0\), of a stationary particle, for which \(p=0\).

The situation with respect to the second law is similar. From experience with devices that convert heat into work, the idea evolved that such devices must have particular properties. Consideration of these properties led to the discovery of a new state function, which we call entropy, and to which we customarily assign the symbol “\(S\)”. We introduce the laws of thermodynamics in §6-13. We repeat our statement of the second law here:

The Second Law of Thermodynamics

In a reversible process in which a closed system accepts an increment of heat, \(\boldsymbol{d}\boldsymbol{q}^\boldsymbol{rev}\), from its surroundings, the change in the entropy of the system, \(\boldsymbol{dS}\), is \(\boldsymbol{dS}\boldsymbol{=}\boldsymbol{dq}^\boldsymbol{rev}/\boldsymbol{T}\). Entropy is a state function. For any reversible process, \(\boldsymbol{dS}_\boldsymbol{universe}\boldsymbol{=}\boldsymbol{0}\), and conversely.
For any spontaneous process, \(\boldsymbol{dS}_\boldsymbol{universe}\boldsymbol{>}\boldsymbol{0}\), and conversely.

If a spontaneous process takes a system from state A to state B, state B may or may not be an equilibrium state. State A cannot be an equilibrium state. Since we cannot use the defining equation to find the entropy change for a spontaneous process, we must use some other method if we are to estimate the value of the entropy change. This means that we must have either an empirical mathematical model from which we can estimate the entropy of a non-equilibrium state or an equilibrium system that is a good model for the initial state of the spontaneous process.

We can usually find an equilibrium system that is a good model for the initial state of a spontaneous process. Typically, some alteration of an equilibrium system makes the spontaneous change possible. The change-enabled state is the initial state for a spontaneous process, but its thermodynamic state functions are essentially identical to those of the pre-alteration equilibrium state. For example, suppose that a solution contains the reactants and products for some reaction that occurs only in the presence of a catalyst. In this case, the solution can be effectively at equilibrium even when the composition does not correspond to an equilibrium position of the reaction. (In an effort to be more precise, we can term this a quasi-equilibrium state, by which we mean that the system is unchanging even though a spontaneous change is possible.) If we introduce a very small quantity of catalyst, and consider the state of the system before any reaction occurs, all of the state functions that characterize the system must be essentially unchanged. Nevertheless, as soon as the catalyst is introduced, the system can no longer be considered to be in an equilibrium state. The spontaneous reaction proceeds until it reaches equilibrium. We can find the entropy change for the spontaneous process by finding the entropy change for a reversible process that takes the initial, pre-catalyst, quasi-equilibrium state to the final, post-catalyst, equilibrium state.

Our statement of the second law establishes the properties of entropy by postulate. While this approach is rigorously logical, it does not help us understand the ideas involved. Like the first law, the second law can be stated several ways. To develop our understanding of entropy and its properties, it is useful to again consider a more traditional statement of the second law:

A Traditional statement of the second law

It is impossible to construct a machine that operates in a cycle, exchanges heat with its surroundings at only one temperature, and produces work in the surroundings.

When we introduce the qualification that the machine “exchanges heat with its surroundings at only one temperature,” we mean that the temperature of the surroundings has a particular value whenever the machine and surroundings exchange heat. The statement does not place any conditions on the temperature of the machine at any time.

In this chapter, we have frequent occasion to refer to each of these statements. To avoid confusing them, we will refer to our statement of the second law as the entropy-based statement. We will refer to the statement above as the machine-based statement of the second law.

By “a machine”, we mean a heat engine—a device that accepts heat and produces mechanical work. This statement asserts that a “perpetual motion machine of the second kind” cannot exist.
Such a machine accepts heat energy and converts all of it into work, while itself returning to the same state at the end of each cycle. (In §7-11, we note that a “perpetual motion machine of the first kind” is one whose operation violates the principle of conservation of energy.) Normally, we view this statement as a postulate. We consider that we infer it from experience. Unlike our statements about entropy, which are entirely abstract, this statement makes an assertion about real machines of the sort that we encounter in daily life. We can understand the assertion that it makes in concrete terms: A machine that could convert heat from a constant-temperature source into work could extract heat from ice water, producing ice cubes in the water and an equivalent amount of work elsewhere in the surroundings. This machine would not exchange heat with any other heat reservoir. Our machine-based statement of the second law postulates that no such machine can exist.

Our entropy-based statement of the second law arose from thinking about the properties of machines that do convert heat into work. We trace this thinking to see how our entropy-based statement of the second law was developed. Understanding this development gives us a better appreciation for the meaning of entropy. We find that we must supplement the machine-based statement of the second law with additional assumptions in order to arrive at all of the properties of the entropy function that are asserted in the entropy-based statement.

However, before we undertake to develop the entropy-based statement of the second law from the machine-based statement, let us develop the converse; that is, let us show that the machine-based statement is a logical consequence of the entropy-based statement. To do so, we assume that a perpetual motion machine of the second kind is possible. To help keep our argument clear, let proposition \(\mathrm{MSL}\) be the machine-based statement. We are assuming that proposition \(\mathrm{MSL}\) is false, so that proposition \(\sim \mathrm{MSL}\) is true. We let \(\mathrm{SL}\) be the entropy-based statement of the second law.

Consider the interaction of this perpetual motion machine, \(\mathrm{PPM}\), with its surroundings. From our entropy-based statement of the second law, we can assert some important facts about the entropy changes that accompany operation of the machine. Since entropy is a state function, \(\Delta S=0\) for one cycle of the machine. If the machine works (that is, \(\sim \mathrm{MSL}\) is true), then the entropy-based statement requires that \(\Delta S_{universe}=\Delta S+\Delta \hat{S}\ge 0\). Since \(\Delta S=0\), it follows that \(\Delta \hat{S}\ge 0\). We can make this more explicit by writing: \(\left(\mathrm{SL\ and}\ \sim \mathrm{MSL}\right)\Rightarrow \Delta \hat{S}\ge 0\).

The machine-based statement of the second law also enables us to determine the entropy change in the surroundings from our second-law definition of entropy. In one cycle, this machine (system) delivers net work, \(\hat{w}>0\), to the surroundings; it accepts a net quantity of heat, \(q>0\), from the surroundings, which are at temperature, \(\hat{T}\). Simultaneously, the surroundings surrender a quantity of heat, \(\hat{q}\), where \(\hat{q}=-q\), and \(\hat{q}<0\). The change that occurs in one cycle of the machine need not be reversible.
However, whether the change is reversible or not, the entire thermal change in the surroundings consists in the exchange of an amount of heat, \(\hat{q}<0\), by a constant temperature reservoir at \(\hat{T}\). We can effect identically the same change in the surroundings using some other process to reversibly extract this amount of heat. The entropy change in the surroundings in this reversible process will be \({\hat{q}}/{\hat{T}}\), and this will be the same as the entropy change for the surroundings in one cycle of the machine. (We consider this conclusion further in §15.) It follows that \(\Delta \hat{S}={\hat{q}}/{\hat{T}}\), and since \(\hat{q}<0\), while \(\hat{T}>0\), we have \(\Delta \hat{S}<0\). We can write this conclusion more explicitly: \(\left(\mathrm{SL\ and}\ \sim \mathrm{MSL}\right)\Rightarrow \Delta \hat{S}<0\).

By assuming a perpetual motion machine of the second kind is possible—that is, by assuming \(\sim \mathrm{MSL}\) is true—we derive the contradiction that both \(\Delta \hat{S}\ge 0\) and \(\Delta \hat{S}<0\). Therefore, proposition \(\sim \mathrm{MSL}\) must be false. Proposition \(\mathrm{MSL}\) must be true. The entropy-based second law of thermodynamics implies that a perpetual motion machine of the second kind is not possible. That is, the entropy-based statement of the second law implies the machine-based statement. (We prove that \(\sim \left(\mathrm{SL\ and}\ \sim \mathrm{MSL}\right)\); it follows that \(\mathrm{SL\ }\mathrm{\Rightarrow }\mathrm{MSL}\). For a more detailed argument, see problem 2.)
9.2: The Carnot Cycle for an Ideal Gas and the Entropy Concept
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.02%3A_The_Carnot_Cycle_for_an_Ideal_Gas_and_the_Entropy_Concept

Historically, the steam engine was the first machine for converting heat into work that could be exploited on a large scale. The steam engine played a major role in the industrial revolution and thus in the development of today’s technology-intensive economy. It was important also in the development of the basic concepts of thermodynamics. A steam engine produces work when hot steam under pressure is introduced into a cylinder, driving a piston outward. A shaft connects the piston to a flywheel. When the connecting shaft reaches its greatest extension, the spent steam is vented to the atmosphere. Thereafter the flywheel drives the piston inward.

The economic viability of the steam engine derives, in part, from the fact that the spent steam can be vented to the atmosphere at the end of each cycle. However, this is not a necessary feature of heat engines. We can devise engines that alternately heat and cool a captive working fluid to convert heat energy into mechanical work. Stirling engines are practical devices of this type. A Carnot engine is a conceptual engine that exploits the response of a closed system to temperature changes. A Carnot engine extracts heat from one reservoir at a fixed high temperature and discharges a lesser amount of heat into a second reservoir at a fixed lower temperature. An amount of energy equal to the difference between these increments of heat energy appears in the surroundings as work.

For one cycle of the Carnot engine, let the heat transferred to the system from the hot and cold reservoirs be \(q_h\) and \(q_{\ell }\) respectively. We have \(q_h>0\) and \(q_{\ell }<0\). Let the net work done on the system be \(w_{net}\) and the net work that appears in the surroundings be \({\hat{w}}_{net}\). We have \({\hat{w}}_{net}>0\), \({\hat{w}}_{net}=-w_{net}\), and \(w_{net}<0\). For one cycle of the engine, \(\Delta E=0\), and since \[\Delta E=q_h+q_{\ell }+w_{net}=q_h+q_{\ell }-{\hat{w}}_{net}, \nonumber \] it follows that \({\hat{w}}_{net}=q_h+q_{\ell }\). The energy input to the Carnot engine is \(q_h\), and the useful work that appears in the surroundings is \({\hat{w}}_{net}\). (The heat accepted by the low-temperature reservoir, \({\hat{q}}_{\ell }=-q_{\ell }>0\), is a waste product, in the sense that it represents energy that cannot be converted to mechanical work using this cycle. All feasible heat engines share this feature of the Carnot engine. In contrast, a perpetual motion machine of the second kind converts its entire heat intake to work; no portion of its heat intake goes unused.) The efficiency, \(\epsilon\), with which the Carnot engine converts the input energy, \(q_h\), to useful output energy, \({\hat{w}}_{net}\), is therefore \[\epsilon =\frac{\hat{w}_{net}}{q_h}=\frac{q_h+q_{\ell}}{q_h}=1+\frac{q_{\ell }}{q_h} \nonumber \]

We can generalize our consideration of heat engines to include any series of changes in which a closed system exchanges heat with its surroundings at more than one temperature, delivers a positive quantity of work to the surroundings, and returns to its original state. We use the Carnot cycle and the machine-based statement of the second law to analyze systems that deliver pressure–volume work to the surroundings. We consider both reversible and irreversible systems.
We begin by considering reversible Carnot cycles. If any system reversibly traverses any closed path on a pressure–volume diagram, the area enclosed by the path represents the pressure–volume work exchanged between the system and its surroundings. If the area is not zero, the system temperature changes during the cycle. If the cycle is reversible, all of the heat transfers that occur must occur reversibly. We can apply our reasoning about reversible cycles to any closed system containing any collection of chemical substances, so long as any phase changes or chemical reactions that occur do so reversibly. This means that all phase and chemical changes that occur in the system must adjust rapidly to the new equilibrium positions that are imposed on them as a system traverses a Carnot cycle reversibly. Consider one mole of an ideal gas whose initial pressure, volume, and temperature are \(P_1\), \(V_1\), and \(T_h\). From this initial state, we cause the ideal gas to undergo a reversible isothermal expansion in which it absorbs a quantity of heat, \(q_h\), from a high-temperature heat reservoir at \({\hat{T}}_h\). We designate the pressure, volume, and temperature at the end of this isothermal expansion as \(P_2\), \(V_2\), and \(T_h\). In a second step, we reversibly and adiabatically expand the ideal gas until its temperature falls to that of the second, low-temperature, heat reservoir. We designate the pressure, volume, and temperature at the end of this adiabatic expansion as \(P_3\), \(V_3\), and \(T_{\ell }\). We begin the return portion of the cycle by reversibly and isothermally compressing the ideal gas at the temperature of the cold reservoir. We continue this reversible isothermal compression until the ideal gas reaches the pressure and volume from which an adiabatic compression will just return it to the initial state. We designate the pressure, volume, and temperature at the end of this isothermal compression by \(P_4\), \(V_4\), and \(T_{\ell }\). During this step, the ideal gas gives up a quantity of heat, \(q_{\ell }<0\), to the low-temperature reservoir. Finally, we reversibly and adiabatically compress the ideal gas to its original pressure, volume, and temperature. For the high-temperature isothermal step, we have \[-q_h=w_h=-RT_h \ln \left(\frac{V_2}{V_1}\right) \nonumber \]and for the low-temperature isothermal step, we have\[-q_{\ell }=w_{\ell }=-RT_{\ell } \ln \left(\frac{V_4}{V_3}\right) \nonumber \]For the adiabatic expansion and compression, we have \[q_{exp}=q_{comp}=0 \nonumber \]The corresponding energy and work terms are\[{\Delta }_{exp}E=w_{exp}=\int^{T_{\ell }}_{T_h}{C_VdT} \nonumber \]for the adiabatic expansion and\[{\Delta }_{comp}E=w_{comp}=\int^{T_h}_{T_{\ell }}{C_VdT} \nonumber \]for the adiabatic compression. The heat-capacity integrals are the same except for the direction of integration; they sum to zero, and we have \(w_{exp}+w_{comp}=0\). The net work done on the system is the sum of the work for these four steps, \(w_{net}=w_h+w_{exp}+w_{\ell }+w_{comp}=w_h+w_{\ell }\). The heat input occurs at the high-temperature reservoir, so that \(q_h>0\). 
The heat discharge occurs at the low-temperature reservoir, so that \(q_{\ell }<0\). For one cycle of the reversible, ideal-gas Carnot engine,\[\epsilon =1+\frac{q_{\ell}}{q_h}=1+\frac{RT_{\ell } \ln \left({V_4}/{V_3}\right)}{RT_h \ln \left({V_2}/{V_1}\right)} \nonumber \]Because the two adiabatic steps involve the same limiting temperatures, the energy of an ideal gas depends only on temperature, and \(dE=dw\) for both steps; that is, \(C_VdT=-PdV=-\left({RT}/{V}\right)dV\) along each adiabat, so that\[\int^{T_{\ell }}_{T_h}{\frac{C_V}{T}}dT=-\int^{V_3}_{V_2}{\frac{R}{V}}dV=-R{ \ln \left(\frac{V_3}{V_2}\right)\ } \nonumber \]and\[\int^{T_h}_{T_{\ell }}{\frac{C_V}{T}}dT=-\int^{V_1}_{V_4}{\frac{R}{V}}dV=-R{ \ln \left(\frac{V_1}{V_4}\right)\ } \nonumber \]The integrals over \(T\) are the same except for the direction of integration. They sum to zero, so that \(-R{ \ln \left({V_3}/{V_2}\right)\ }-R{ \ln \left({V_1}/{V_4}\right)\ }=0\) and\[\frac{V_2}{V_1}=\frac{V_3}{V_4} \nonumber \]Using this result, the second equation for the reversible Carnot engine efficiency becomes\[\epsilon =1-\frac{T_{\ell }}{T_h} \nonumber \]Equating our expressions for the efficiency of the reversible Carnot engine, we find\[\epsilon =1+\frac{q_{\ell }}{q_h}=1-\frac{T_{\ell }}{T_h} \nonumber \] from which we have\[\frac{q_h}{T_h}+\frac{q_{\ell }}{T_{\ell }}=0 \nonumber \]Since there is no heat transfer in the adiabatic steps, \(q_{exp}=q_{comp}=0,\) and we can write this sum as\[\sum_{cycle}{\frac{q_i}{T_i}}=0 \nonumber \]If we divide the path around the cycle into a large number of very short segments, the limit of this sum as the \(q_i\) become very small is\[\oint{\frac{dq^{rev}}{T}}=0 \nonumber \]where the superscript “\(rev\)” serves as a reminder that the cycle must be traversed reversibly. Now, we can define a new function, \(S\), by the differential expression\[dS=\frac{dq^{rev}}{T} \nonumber \]In this expression, \(dS\) is the incremental change in \(S\) that occurs when the system reversibly absorbs a small increment of heat, \({dq}^{rev}\), at a particular temperature, \(T\). For an ideal gas traversing a Carnot cycle, we have shown that\[\Delta S=\oint{dS}=\oint{\frac{dq^{rev}}{T}}=0 \nonumber \]\(S\) is, of course, the entropy function described in our entropy-based statement of the second law. We now want to see what the machine-based statement of the second law enables us to deduce about the properties of \(S\). Since the change in \(S\) is zero when an ideal gas goes around a complete Carnot cycle, we can conjecture that \(S\) is a state function. Of course, the fact that \(\Delta S=0\) around one particular cycle does not prove that \(S\) is a state function. If \(S\) is a state function, it must be true that \(\Delta S=0\) around any cycle whatsoever. We now prove this for any reversible cycle. The proof has two steps. In the first, we show that \(\oint{dq^{rev}/T}=0\) for a machine that uses any reversible system operating between two constant-temperature heat reservoirs to convert heat to work. In the second step, we show that \(\oint{dq^{rev}/T}=0\) for any system that reversibly traverses any closed path. This page titled 9.2: The Carnot Cycle for an Ideal Gas and the Entropy Concept is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,232 |
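The two central results of this section, \({q_h}/{T_h}+{q_{\ell }}/{T_{\ell }}=0\) and \(\epsilon =1-{T_{\ell }}/{T_h}\), can be verified numerically. The sketch below does so for one mole of a monatomic ideal gas, for which \(C_V={3R}/{2}\) is constant; the temperatures and volumes chosen are arbitrary illustrative values.

```python
import math

# One mole of a monatomic ideal gas (constant Cv = 3R/2) around a
# reversible Carnot cycle.  All state values below are illustrative.
R, Cv = 8.314, 1.5 * 8.314        # J/(mol K)
T_h, T_l = 500.0, 300.0           # reservoir temperatures, K
V1, V2 = 0.010, 0.030             # m^3, chosen arbitrarily (V2 > V1)

# Along a reversible adiabat, Cv ln(T2/T1) = -R ln(V2/V1), so
# V3 = V2 (T_h/T_l)**(Cv/R) and V4 = V1 (T_h/T_l)**(Cv/R).
V3 = V2 * (T_h / T_l) ** (Cv / R)
V4 = V1 * (T_h / T_l) ** (Cv / R)

q_h = R * T_h * math.log(V2 / V1)   # heat absorbed on the hot isotherm
q_l = R * T_l * math.log(V4 / V3)   # heat rejected on the cold isotherm (< 0)

print(q_h / T_h + q_l / T_l)          # ~0: the q/T sum vanishes
print(1 + q_l / q_h, 1 - T_l / T_h)   # both expressions give the efficiency
```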
9.3: The Carnot Cycle for Any Reversible System
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.03%3A_The_Carnot_Cycle_for_Any_Reversible_System | To show that \(\oint dq^{rev}/T=0\) for any reversible system taken around a Carnot cycle, we first observe that the Carnot cycle can be traversed in the opposite direction. In this case, work is delivered to the engine and a quantity of heat is transferred from the low-temperature reservoir to the high-temperature reservoir. Operated in reverse, the Carnot engine is a refrigerator. Suppose that we have two identical ideal-gas Carnot machines, one of which we operate as an engine while we operate the other as a refrigerator. If we configure them so that the work output of the engine drives the refrigerator, the effects of operating them together cancel completely. The refrigerator exactly consumes the work output of the engine. The heat transfers to and from the heat reservoirs offset exactly. Now, let us consider an ideal-gas Carnot engine and any other reversible engine that extracts heat from a high-temperature reservoir and rejects a portion of it to a low-temperature reservoir. Let us call these engines A and B. We suppose that one is operated to produce work in its surroundings (\(w<0\)); the other is operated to consume this work and transfer net heat energy from the low-temperature to the high-temperature reservoir. Let the net work done in one cycle on machines A and B be \(w_{netA}\) and \(w_{netB}\), respectively. We can choose to make these engines any size that we please. Let us size them so that one complete cycle of either engine exchanges the same magnitude of heat with the high-temperature reservoir. That is, if the high-temperature reservoir delivers heat \(q_{hA}\) to engine A, it receives heat of the same magnitude from engine B, so that \(q_{hB}=-q_{hA}\). With one operating as an engine and the other operating as a refrigerator, we have \(q_{hA}+q_{hB}=0\). When both engine and refrigerator have completed a cycle, the high-temperature reservoir has returned to its original state. We can create a combined device that consists of A running as an engine, B running as a refrigerator, and the high-temperature reservoir. When it executes one complete cycle, the initial condition of the combined device is restored. Therefore, since \(E\) is a state function, we have\[\begin{align} \Delta E &= w_{netA}+q_{hA}+q_{\ell A}+w_{netB}+q_{hB}+q_{\ell B} \\[4pt] &=w_{netA}+w_{netB}+q_{\ell A}+q_{\ell B} \\[4pt] &=0. \end{align} \nonumber \]where we use the constraint \(q_{hA}+q_{hB}=0\). Let us consider the possibility that \(w_{netA}+w_{netB}<0\); that is, the combined device does net work on the surroundings. Then, \(\Delta E=0\) implies that \(q_{\ell A}+q_{\ell B}>0\). In this cyclic process, the combined device takes up a positive quantity of heat from a constant-temperature reservoir and delivers a positive quantity of work to the surroundings. There is no other change in either the system or the surroundings. This violates the machine-based statement of the second law. Evidently, it is not possible for the combined device to operate in the manner we have hypothesized. 
We conclude that any such machine must always operate such that \(w_{netA}+w_{netB}\ge 0\); that is, the net work done on the combined machine during any complete cycle must be either zero or some positive quantity. In concluding that \(w_{netA}+w_{netB}\ge 0\), we specify that the combined machine has A running as a heat engine and B running as a refrigerator. Now, suppose that we reverse their roles, and let \(w^*_{netA}\) and \(w^*_{netB}\) represent the net work for the reversed combination. Applying the same argument as previously, we conclude that \(w^*_{netA}+w^*_{netB}\ge 0\). But, since the direction of operation is reversed for both machines, we must also have \(w^*_{netA}=-w_{netA}\) and \(w^*_{netB}=-w_{netB}\). Hence we have \(-w_{netA}-w_{netB}\ge 0\) or \(w_{netA}+w_{netB}\le 0\). We conclude, therefore, that\[w_{netA}+w_{netB}=0 \nonumber \]for any two, matched, reversible engines operating around a Carnot cycle. This conclusion can be restated as a condition on the efficiencies of the two machines. The individual efficiencies are \({\epsilon }_A=-w_{netA}/q_{hA}\) and \({\epsilon }_B=-w_{netB}/q_{hB}\). (The efficiency equation is unaffected by the direction of operation, because changing the direction changes the sign of every energy term in the cycle. Changing the direction of operation is equivalent to multiplying both the numerator and denominator by minus one.) Then, from \(w_{netA}+w_{netB}=0\), it follows that\[{\epsilon }_Aq_{hA}+{\epsilon }_Bq_{hB}=0 \nonumber \]Since we sized A and B so that \(q_{hA}+q_{hB}=0\), we have\[\epsilon_Aq_{hA}-{\epsilon }_Bq_{hA}=0 \nonumber \] so that \[{\epsilon }_A={\epsilon }_B \nonumber \]for any reversible Carnot engines A and B operating between the same two heat reservoirs. For the ideal gas engine, we found \(\epsilon =1-T_{\ell }/{T_h}\). For any reversible Carnot engine, we have \(\Delta E=0=w_{net}+q_h+q_{\ell }\), so that \(-w_{net}=q_h+q_{\ell }\), and\[\epsilon =\frac{-w_{net}}{q_h}=1+\frac{q_{\ell }}{q_h} \nonumber \]This means that the efficiency relationship\[\epsilon =1-\frac{T_{\ell }}{T_h}=1+\frac{q_{\ell }}{q_h} \nonumber \]applies to any reversible Carnot engine. It follows that the integral of \(dq^{rev}/T\) around a Carnot cycle is zero for any reversible system. The validity of these conclusions is independent of the type of work that the engine produces; if engine A is an ideal-gas engine, engine B can comprise any system and can produce any kind of work. In obtaining this result from the machine-based statement of the second law, we make the additional assumption that pressure–volume work can be converted entirely to any other form of work, and vice versa. That is, we assume that the work produced by engine A can reversibly drive engine B as a refrigerator, whether engines A and B produce the same or different kinds of work. This page titled 9.3: The Carnot Cycle for Any Reversible System is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,233 |
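A minimal numerical sketch of the matched-engine argument: both machines are taken to be reversible, so each has efficiency \(1-{T_{\ell }}/{T_h}\), and when one is run in reverse the combined device does no net work. The temperatures and the heat exchanged are illustrative values.

```python
# Two reversible Carnot machines matched at the hot reservoir.
# Illustrative temperatures; any reversible engine between the same
# reservoirs must show the same efficiency, 1 - T_l/T_h.
T_h, T_l = 400.0, 280.0
eps = 1 - T_l / T_h            # efficiency of any reversible Carnot engine

q_hA = 500.0                   # heat taken from the hot reservoir by A
q_hB = -q_hA                   # B, run as a refrigerator, returns it

w_netA = -eps * q_hA           # work done on A (negative: A does work)
w_netB = -eps * q_hB           # work done on B (positive: B absorbs work)

print(w_netA + w_netB)         # 0.0: the combined device does no net work
```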
9.4: The Entropy Change around Any Cycle for Any Reversible System
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.04%3A_The_Entropy_Change_around_Any_Cycle_for_Any_Reversible_System | Any system reversibly traversing any closed curve on a pressure–volume diagram exchanges work with its surroundings, and the area enclosed by the curve represents the amount of this work. In the previous section, we found \(\oint{dq^{rev}/T}=0\) for any system that traverses a Carnot cycle reversibly. We now show that this is true for any system that traverses any closed path reversibly. This establishes that \(\Delta S\) is zero for any system traversing any closed path reversibly and proves that \(S\), defined by \(dS=dq^{rev}/T\), is a state function. To do so, we introduce an experience-based theorem: The pressure–volume diagram for any reversible system can be tiled by intersecting lines that represent isothermal and adiabatic paths. These lines can be packed as densely as we please, so that the tiling of the pressure–volume diagram can be made as closely spaced as we please. The perimeter of any one of the resulting tiles corresponds to a path around a Carnot cycle. Given any arbitrary closed curve on the pressure–volume diagram, we can select a set of tiles that just encloses it. The perimeter of this set of tiles approximates the path of the arbitrary curve. Since the tiling can be made as fine as we please, the perimeter of the set of tiles can be made to approximate the path of the arbitrary curve as closely as we please. Suppose that we traverse the perimeter of each of the individual tiles in a clockwise direction, adding up \(q^{rev}/T\) as we go. Segments of these perimeters fall into two groups. One group consists of segments that are on the perimeter of the enclosing set of tiles. The other group consists of segments that are common to two tiles. When we traverse both of these tiles in a clockwise direction, the shared segment is traversed once in one direction and once in the other. When we add up \(q^{rev}/T\) for these two traverses of the same segment, we find that the sum is zero, because we have \(q^{rev}/T\) in one direction and \(-q^{rev}/T\) in the other. This means that the sum of \(q^{rev}/T\) around all of the tiles will just be equal to the sum of \(q^{rev}/T\) around those segments that lie on the perimeter of the enclosing set. That is, we have\[\sum_{ \begin{array}{c} \mathrm{cycle} \\ \mathrm{perimeter} \end{array}} \frac{q^{rev}}{T}+\sum_{ \begin{array}{c} \mathrm{interior} \\ \mathrm{segments} \end{array}} \frac{q^{rev}}{T}=\sum_{ \begin{array}{c} \mathrm{all} \\ \mathrm{tiles} \end{array}} \left\{\sum_{ \begin{array}{c} \mathrm{tile} \\ \mathrm{perimeter} \end{array}} \frac{q^{rev}}{T}\right\} \nonumber \]where \[\sum_{ \begin{array}{c} \mathrm{interior} \\ \mathrm{segments} \end{array} } \frac{q^{rev}}{T}=0 \nonumber \]because each interior segment is traversed twice, and the two contributions cancel exactly. This set of tiles has another important property. Since each individual tile represents a reversible Carnot cycle, we know that\[\sum_{ \begin{array}{c} \mathrm{tile} \\ \mathrm{perimeter} \end{array} }{\frac{q^{rev}}{T}}=0 \nonumber \]around each individual tile. Since the sum around each tile is zero, the sum of all these sums is zero. 
It follows that the sum of \({q^{rev}}/{T}\) around the perimeter of the enclosing set is zero:\[\sum_{ \begin{array}{c} \mathrm{cycle} \\ \mathrm{perimeter} \end{array} }{\frac{q^{rev}}{T}}=0 \nonumber \]By tiling the pressure–volume plane as densely as necessary, we can make the perimeter of the enclosing set as close as we like to any closed curve. The heat increments become arbitrarily small, and\[\mathop{\mathrm{lim}}_{q^{rev}\to {dq}^{rev}} \left[\sum_{ \begin{array}{c} cycle \\ perimeter \end{array} } \frac{q^{rev}}{T}\right]\ =\oint{\frac{dq^{rev}}{T}}=0 \nonumber \] For any reversible engine producing pressure–volume work, we have \(\oint{dS=0}\) around any cycle. We can extend this analysis to reach the same conclusion for a reversible engine that produces any form of work. To see this, let us consider the tiling theorem more carefully. When we say that the adiabats and isotherms tile the pressure–volume plane, we mean that each point in the pressure–volume plane is intersected by one and only one adiabat and by one and only one isotherm. When only pressure–volume work is possible, every point in the pressure–volume plane represents a unique state of the system. Therefore, the tiling theorem asserts that every state of the variable-pressure system can be reached along one and only one adiabat and one and only one isotherm. From experience, we infer that this statement remains true for any form of work. That is, every state of any reversible system can be reached by one and only one isotherm and by one and only one adiabat when any form of work is done. If more than one form of work is possible, there is an adiabat for each form of work. If changing \({\theta }_1\) and changing \({\theta }_2\) change the energy of the system, the effects on the energy of the system are not necessarily the same. In general, \({\mathit{\Phi}}_1\) is not the same as \({\mathit{\Phi}}_2\), where\[{\mathit{\Phi}}_i={\left(\frac{\partial E}{\partial {\theta }_i}\right)}_{V,{\theta }_{m\neq i}} \nonumber \]From §3, we know that a reversible Carnot engine doing any form of work can be matched with a reversible ideal-gas Carnot engine in such a way that the engines complete the successive isothermal and adiabatic steps in parallel. At each step, each engine experiences the same heat, work, energy, and entropy changes as the other. Just as we can plot the reversible ideal-gas Carnot cycle as a closed path in pressure–volume space, we can plot a Carnot cycle producing any other form of work as a closed path with successive isothermal and adiabatic steps in \({\mathit{\Phi}}_i{-\theta }_i\) space. Just as any closed path in pressure–volume space can be tiled by (or built up from) arbitrarily small reversible Carnot cycles, so any closed path in \({\mathit{\Phi}}_i{-\theta }_i\) space can be tiled by such cycles. Therefore, the argument we use to show that \(\oint{dS=0}\) for any closed reversible cycle in pressure–volume space applies equally well to a closed reversible cycle in which heat is used to produce any other form of work. This page titled 9.4: The Entropy Change around Any Cycle for Any Reversible System is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,234 |
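The cancellation of interior segments can be imitated with a toy computation: assign an arbitrary number (standing in for \(q^{rev}/T\) along that segment) to every directed edge of a square grid of tiles, sum clockwise around every tile, and compare the result with the sum around the outer perimeter alone. The grid size and edge values below are arbitrary; only the bookkeeping matters.

```python
# A toy illustration of the tiling cancellation: tile a region with
# small cycles, give every directed edge an arbitrary value, and verify
# that summing clockwise around every tile leaves only the contributions
# from the outer perimeter.  The numbers are arbitrary, not q/T data.
import random

random.seed(1)
N = 4                                   # N x N grid of square "tiles"
# value for each horizontal edge (column i, height j) and vertical edge
# (x position i, row j), traversed in its positive direction
h = {(i, j): random.uniform(-1, 1) for i in range(N) for j in range(N + 1)}
v = {(i, j): random.uniform(-1, 1) for i in range(N + 1) for j in range(N)}

def tile_sum(i, j):
    # clockwise around tile (i, j): bottom (+), right (+), top (-), left (-)
    return h[(i, j)] + v[(i + 1, j)] - h[(i, j + 1)] - v[(i, j)]

total = sum(tile_sum(i, j) for i in range(N) for j in range(N))
perimeter = (sum(h[(i, 0)] for i in range(N))        # bottom edges
             + sum(v[(N, j)] for j in range(N))      # right edges
             - sum(h[(i, N)] for i in range(N))      # top edges
             - sum(v[(0, j)] for j in range(N)))     # left edges

print(abs(total - perimeter) < 1e-12)   # True: interior edges cancel
```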
9.5: The Tiling Theorem and the Paths of Cyclic Process in Other Spaces
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.05%3A_The_Tiling_Theorem_and_the_Paths_of_Cyclic_Process_in_Other_Spaces | We view the tiling theorem as a generalization from experience, just as the machine-based statement of the second law is such a generalization. Let us consider the kinds of familiar observations from which we infer that every equilibrium state of any system is intersected by one and only one adiabat and by one and only one isotherm. When only pressure–volume work is possible, each pressure–volume point specifies a unique equilibrium state of the system. Since temperature is a state function, the temperature of this state has one and only one value. When another form of work is possible, every \({\mathit{\Phi}}_i{-\theta }_i\) point specifies a unique state for which the temperature has one and only one value. From experience, we know that we can produce a new state of the system, at the same temperature, by exchanging heat and work with it in a concerted fashion. We can make this change of state arbitrarily small, so that successive equilibrium states with the same temperature are arbitrarily close to one another. This succession of arbitrarily close equilibrium states is an isotherm. Therefore, at least one isotherm intersects any equilibrium state. There cannot be two such isotherms. If there were two isotherms, the system would have two temperatures, violating the principle that temperature is a state function. In an adiabatic process, the system exchanges energy as work but not as heat. From experience, we know that we can effect such a change with any reversible system. The result is a new equilibrium state. When we make the increment of work arbitrarily small, the new equilibrium state is arbitrarily close to the original state. Successive exchanges of arbitrarily small work increments produce successive equilibrium states that are arbitrarily close to one another. This succession of arbitrarily close equilibrium states is an adiabat. If the same state of a system could be reached by two reversible adiabats involving the same form of work, the effect of doing a given amount of this work on an equilibrium system would not be unique. From the same initial state, two reversible adiabatic experiments could do the same amount of the same kind of work and reach different final states of the system. For example, in two different experiments, we could raise a weight reversibly from the same initial elevation, do the same amount of work in each experiment, and find that the final elevation of the weight is different. Any such outcome conflicts with the observations that underlie our ideas about reversible processes. More specifically, the existence of two adiabats through a given point, in any \({\mathit{\Phi}}_i{-\theta }_i\) space, violates the machine-based statement of the second law. Two such adiabats would necessarily intersect a common isotherm. A path along one adiabat, the isotherm, and the second adiabat would be a cycle that restored the system to its original state. This path would enclose a finite area. Traversed in the appropriate direction, the cycle would produce work in the surroundings. By the first law, the system would then accept heat as it traverses the isotherm. 
The system would exchange heat with the surroundings at a single temperature and produce positive work in the surroundings, thus violating the machine-based statement. If an adiabatic process that connects two states A and B is reversible, we see that the system follows the same path, in opposite directions, when it does work going from A to B as it does when work is done on it as it goes from B to A. From another perspective, we can say that the tiling theorem is a consequence of our assumptions about reversible processes. Our conception of a reversible process is that the energy, pressure, temperature, and volume are continuous functions of state, with continuous derivatives. That there is one and only one isotherm for every state is equivalent to the assumption that temperature is a continuous (single-valued) function of the state of the system. That there is one and only one adiabat for every state is equivalent to the assumption that \({\left({\partial E}/{\partial V}\right)}_{T,{\theta }_1}\), or generally, \({\left({\partial E}/{\partial {\theta }_i}\right)}_{T,V,{\theta }_{m\neq i}}\), is a continuous, single-valued function of the state of the system. With these ideas in mind, let us now observe that any reversible cycle can be described by a closed path in a space whose coordinates are \(T\) and \({q^{rev}}/{T}\) (entropy). With temperature on the ordinate and entropy on the abscissa, an isotherm is a horizontal line, and a line of constant entropy (an isentrope) is vertical. A reversible Carnot cycle is a closed rectangle, and the area of this rectangle corresponds to the reversible work done by the system on its surroundings in one cycle. Any equilibrium state of the system corresponds to a particular point in this space. Any closed path can be tiled arbitrarily densely by isotherms and isentropes. Any reversible cycle involving any form of work is represented by a closed path in this space. This representation is an alternative illustration of the argument that we make in Section 9.4. The path in this space is independent of the kind of work done, reinforcing the conclusion that \(\oint{dq^{rev}/T=0}\) for a reversible Carnot cycle producing any form of work. The fact that a cyclic process corresponds to a closed path in this space is equivalent to the fact that entropy is a state function. To appreciate this aspect of the path of a cyclic process in \(T\)–\(q^{rev}/T\) space, let us describe the path of the same process in a space whose coordinates are \(T\) and \(q^{rev}\). With \(q^{rev}\) on the abscissa, isotherms are again horizontal lines and adiabats are vertical lines. In this space, a reversible Carnot cycle does not begin and end at the same point. The path is not closed. Similarly, the representation of an arbitrary reversible cycle is not a closed figure. The difference between the representations of a reversible cyclic process in these two spaces illustrates graphically the fact that entropy is a state function while heat is not. This page titled 9.5: The Tiling Theorem and the Paths of Cyclic Process in Other Spaces is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,235 |
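One immediate consequence of the temperature–entropy picture is that the area of the Carnot rectangle equals the net work delivered to the surroundings. The sketch below checks this with illustrative values.

```python
# For a reversible Carnot cycle plotted in temperature-entropy space,
# the cycle is a rectangle, and the enclosed area (T_h - T_l) * dS equals
# the net work delivered to the surroundings.  Illustrative values only.
T_h, T_l = 450.0, 300.0     # K
dS = 2.0                    # entropy change along each isotherm, J/K

q_h = T_h * dS              # heat absorbed along the high-T isotherm
q_l = -T_l * dS             # heat rejected along the low-T isotherm

area = (T_h - T_l) * dS     # area of the rectangle in T-S space
print(area, q_h + q_l)      # both 300.0 J
```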
9.6: Entropy Changes for A Reversible Process
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.06%3A_Entropy_Changes_for_A_Reversible_Process | Let us consider a closed system that undergoes a reversible change while in contact with its surroundings. Since the change is reversible, the portion of the surroundings that exchanges heat with the system is at the same temperature as the system: \(T=\hat{T}\). From \(q^{rev}=-\hat{q}^{rev}\) and the definition, \(dS=dq^{rev}/T\), the entropy changes are\[\Delta S={q^{rev}}/{T} \nonumber \]and\[\Delta \hat{S}=\hat{q}^{rev}/T=-q^{rev}/T=-\Delta S \nonumber \]Evidently, for any reversible process, we have\[\Delta S_{universe}=\Delta S+\Delta \hat{S}=0 \nonumber \]Note that these ideas are not sufficient to prove that the converse is true. From only these ideas, we cannot prove that \(\Delta S_{universe}=0\) for a process means that the process is reversible; it remains possible that there could be a spontaneous process for which \(\Delta S_{universe}=0\). However, our entropy-based statement of the second law does assert that the converse is true, that \(\Delta S_{universe}=0\) is necessary and sufficient for a process to be reversible. In the next section, we use the machine-based statement of the second law to show that \(\Delta S\ge 0\) for any spontaneous process in an isolated system. We introduce heuristic arguments to infer that \(\Delta S=0\) is not possible for a spontaneous process in an isolated system. From this, we show that \(\Delta S_{universe}>0\) for any spontaneous process and hence that \(\Delta S_{universe}=0\) is not possible for any spontaneous process. We conclude that \(\Delta S_{universe}=0\) is sufficient to establish that the corresponding process is reversible. This page titled 9.6: Entropy Changes for A Reversible Process is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,236 |
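The entropy bookkeeping for a reversible heat exchange is simple enough to state as a two-line computation; the temperature and heat below are illustrative values.

```python
# Entropy bookkeeping for a reversible heat exchange: the system and its
# surroundings are at the same temperature, so their entropy changes
# cancel exactly.  Numbers are illustrative.
T = 298.15                 # common temperature of system and surroundings, K
q_rev = 150.0              # heat absorbed reversibly by the system, J

dS_system = q_rev / T
dS_surroundings = -q_rev / T

print(dS_system + dS_surroundings)   # 0.0 for any reversible process
```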
9.7: Entropy Changes for A Spontaneous Process in An Isolated System
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.07%3A_Entropy_Changes_for_A_Spontaneous_Process_in_An_Isolated_System | In Section 9.6, we consider the entropy changes in a system and its surroundings when the process is reversible. We consider now the diametrically opposite situation in which an isolated system undergoes a spontaneous change. From the entropy-based statement of the second law, we know how the entropy of this system and its surroundings change. Since the system is isolated, no change occurs in the surroundings. Thus, \(\Delta \hat{S}=0\); and since \(\Delta S+\Delta \hat{S}>0\), we have \(\Delta S>0\). Let us attempt to develop these conclusions from the machine-based statement of the second law. Since the process occurs irreversibly, we cannot use the heat of the process to find the entropy change for the system. We can calculate the entropy change for a process from the defining equation only if the process is reversible. However, entropy is a state function; using the figure of speech that we introduce in Section 7.21, we can find the entropy change for the spontaneous process by evaluating \(\Delta S\) along a second and reversible path that connects the same initial and final states. The spontaneous process carries the isolated system from state A to state B. We then take the system from B to C along a reversible adiabatic path. In this step, the system does work on the surroundings, or vice versa. The system reaches point C on the diagram. Then we reversibly and isothermally add or remove heat from the system to return to the original state at point A. For the transfer of heat to be reversible, we must have \(T=\hat{T}\) for this step. Hence, the final (and original) temperature of the system at point A is equal to the temperature of the surroundings. The reversible path \(B\to C\to A\) must exist, because the tiling theorem asserts that adiabats (vertical lines) and isotherms (horizontal lines) tile the \(T\)–\(S\) plane arbitrarily densely. Taken literally, this description of state A is inconsistent. We suppose that the initial state A is capable of spontaneous change; therefore, it cannot be an equilibrium state. We suppose that the final state A is reached by a reversible process; therefore, it must be an equilibrium state. We bridge this contradiction by refining our definition of the initial state. The final state A is an equilibrium state with well-defined state functions. What we have in mind is that these final equilibrium-state values also characterize the initial non-equilibrium state. Evidently, the initial state A that we have in mind is a hypothetical state. This hypothetical state approximates the state of a real system that undergoes spontaneous change. By invoking this hypothetical initial state, we eliminate the contradiction between our descriptions of initial state A and final state A. Given a real system that undergoes spontaneous change, we must find approximate values for the real system’s state functions by finding an equilibrium—or quasi-equilibrium—system that adequately models the initial state of the spontaneously changing system. In the development below, we place no constraints on the nature of the system or the spontaneous process. We assume that the state functions of any hypothetical initial state A can be adequately approximated by some equilibrium-state model. However, before we consider the general argument, let us show how these conditions can be met for another specific system. 
Consider a vessel whose interior is divided by a partition. The real gas of a pure substance occupies the space on one side of the partition. The space on the other side of the partition is evacuated. We suppose that this vessel is isolated. The real gas is at equilibrium. We can measure its state functions, including its pressure, volume, and temperature. Now suppose that we puncture the partition. As soon as we do so, the gas expands spontaneously to fill the entire vessel, reaching a new equilibrium position, at a new pressure, volume, and temperature. The gas undergoes a free expansion, as defined in Section 7.17. At the instant the partition is punctured, the system becomes able to undergo spontaneous change. In this hypothetical initial state, before any significant quantity of gas passes through the opening, neither the actual condition of the gas nor the values of its state functions have changed. After the expansion to the new equilibrium state, the original state can be restored by reversible processes of adiabatic compression and isothermal volume adjustment. (Problems 13 and 14 in Chapter 10 deal with the energy and entropy changes for ideal and real gases around a cycle in which spontaneous expansion in an isolated system is followed by reversible restoration of the initial state.) Returning to the general cycle: for the reversible adiabatic transition from B to C, \(d_{BC}q^{rev}=0\) in every incremental part of the path. The transition from C to A occurs reversibly and isothermally; letting the heat of this step be \(q^{rev}_{CA}\), the entropy changes for these reversible steps are, from the defining equation,\[{\Delta }_{BC}S=\int^{T_C}_{T_B}{\frac{d_{BC}q^{rev}}{T}}=0 \nonumber \] and \[{\Delta }_{CA}S=\frac{q^{rev}_{CA}}{\hat{T}} \nonumber \]The energy and entropy changes around this cycle must be zero, whether the individual steps occur reversibly or irreversibly. We have\[\Delta E=q^{spon}_{AB}+w^{spon}_{AB}+q^{rev}_{BC}+w^{rev}_{BC}+q^{rev}_{CA}+w^{rev}_{CA}=q^{rev}_{CA}+w^{rev}_{BC}+w^{rev}_{CA}=0 \nonumber \]where \(q^{spon}_{AB}=w^{spon}_{AB}=0\) because the system is isolated during the spontaneous step, and\[\Delta S={\Delta }_{AB}S+{\Delta }_{BC}S+{\Delta }_{CA}S ={\Delta }_{AB}S+{q^{rev}_{CA}}/{\hat{T}}=0 \nonumber \]We want to analyze this cycle using the machine-based statement of the second law. We have \(w^{rev}_{BC}=-{\hat{w}}^{rev}_{BC}\), \(w^{rev}_{CA}=-{\hat{w}}^{rev}_{CA}\), and \(q^{rev}_{CA}=-{\hat{q}}^{rev}_{CA}\). Let us assume that the system does net work on the surroundings as this cycle is traversed so that \({\hat{w}}^{rev}_{BC}+{\hat{w}}^{rev}_{CA}>0\). Then,\[-\left({\hat{w}}^{rev}_{BC}+{\hat{w}}^{rev}_{CA}\right)=w^{rev}_{BC}+w^{rev}_{CA}<0 \nonumber \]and it follows that \(q^{rev}_{CA}>0\). The system exchanges heat with the surroundings in only one step of this process. In this step, the system extracts a quantity of heat from a reservoir in the surroundings. The temperature of this reservoir remains constant at \(\hat{T}\) throughout the process. The heat extracted by the system is converted entirely into work. This result contradicts the machine-based statement of the second law. Hence, \(w^{rev}_{BC}+w^{rev}_{CA}<0\) is false; it follows that\[w^{rev}_{BC}+w^{rev}_{CA}\ge 0 \nonumber \]and that \[q^{rev}_{CA}\le 0 \nonumber \]For the entropy change in the spontaneous process in the isolated system, we have\[{\Delta }_{AB}S=-{q^{rev}_{CA}}/{\hat{T}}\ge 0 \nonumber \]Now, we introduce the premise that \(q^{rev}_{CA}\neq 0\). 
If this is true, the entropy change in the spontaneous process in the isolated system becomes\[{\Delta }_{AB}S>0 \nonumber \](The converse is also true; that is, \({\Delta }_{AB}S>0\) implies that \(q^{rev}_{CA}\neq 0\).) The premise that \(q^{rev}_{CA}\neq 0\) is independent of the machine-based statement of the second law, which requires only that \(q^{rev}_{CA}\le 0\), as we just demonstrated. It is also independent of the first law, which requires only that \(q^{rev}_{CA}=-w^{rev}_{BC}-w^{rev}_{CA}\). If \(q^{rev}_{CA}\neq 0\), we can conclude that, for a spontaneous process in an isolated system, we must have \(w^{rev}_{BC}+w^{rev}_{CA}>0\) and \(q^{rev}_{CA}<0\). These conditions correspond to doing work on the system and finding that heat is liberated by the system. There is no objection to this; it is possible to convert mechanical energy into heat quantitatively. The conclusions that \(q^{rev}_{CA}<0\) and \({\Delta }_{AB}S>0\) have important consequences; we consider them below. First, however, we consider a line of thought that leads us to infer that \(q^{rev}_{CA}\neq 0\) and hence that \(q^{rev}_{CA}<0\) must be true. Because \(q_{AB}=0\) and \(w_{AB}=0\), we have \(E_A=E_B\). The system can be taken from state A to state B by the reversible process A\(\to C\to B\). Above, we see that if \(q^{rev}_{CA}=0\), we have \(S_A=S_B\). In §6-10, we introduce Duhem’s theorem, which asserts that two thermodynamic variables are sufficient to specify the state of a closed reversible system in which only pressure–volume work is possible. We gave a proof of Duhem’s theorem when the two variables are chosen from among the pressure, temperature, and composition variables that describe the system. We avoided specifying whether other pairs of variables can be used. If we assume now that specifying the variables energy and entropy is always sufficient to specify the state of such a system, it follows that states A and B must in fact be the same state. (In §14, and in greater detail in Chapter 10, we see that the first law and our entropy-based statement of the second law do indeed imply that specifying the energy and entropy specifies the state of a closed reversible system in which only pressure–volume work is possible.) If state A and state B are the same state, that is, if the state functions of state A are the same as those of state B, then it is meaningless to say that there is a spontaneous process that converts state A to state B. Therefore, if A can be converted to B in a spontaneous process in an isolated system, it must be that \(q^{rev}_{CA}\neq 0\). That is,\[\left[\left(q^{rev}_{CA}=0\right)\Rightarrow \sim \left(\mathrm{A\ can\ go\ to\ B\ spontaneously}\right)\right] \nonumber \] \[\Rightarrow \left[\left(A\ \mathrm{can\ go\ to\ B\ spontaneously}\right)\Rightarrow \left(q^{rev}_{CA}\neq 0\right)\right] \nonumber \]From the machine-based statement of the second law, we find \({\Delta }_{AB}S=-{q^{rev}_{CA}}/{\hat{T}}\ge 0\). When we supplement this conclusion with our Duhem’s theorem-based inference that \(q^{rev}_{CA}\neq 0\), we can conclude that \(\Delta S>0\) for any spontaneous process in any isolated system. Because the system is isolated, we have \(\hat{q}=0\), and \(\Delta \hat{S}=0\). For any spontaneous process in any isolated system we have\[\Delta S_{universe}=\Delta S+\Delta \hat{S}>0 \nonumber \]
We can also conclude that the converse is true; that is, if \({\Delta }_{AB}S=S_B-S_A>0\) for a process in which an isolated system goes from state A to state B, the process must be spontaneous. Since any process that occurs in an isolated system must be a spontaneous process, it is only necessary to show that \({\Delta }_{AB}S>0\) implies that state B is different from state A. This is trivial. Because entropy is a state function, \(S_B-S_A>0\) requires that state B be different from state A. None of our arguments depends on the magnitude of the change that occurs. Evidently, the same inequality must describe every incremental portion of any spontaneous process; otherwise, we could define an incremental spontaneous change for which the machine-based statement of the second law would be violated. For every incremental part of any spontaneous change in any isolated system we have \(dS>0\) and\[dS_{universe}=dS+d\hat{S}>0. \nonumber \]These are pivotally important results; we explore their ramifications below. Before doing so, however, let us again consider a system in which only pressure–volume work is possible. There is an alternative way to express the idea that such a system is isolated. Since an isolated system cannot interact with its surroundings in any way, it cannot exchange energy with its surroundings. Its energy must be constant. Since it cannot exchange pressure–volume work, its volume must be constant. Hence, isolation implies constant \(E\) and \(V\). If only pressure–volume work is possible, the converse must be true; that is, if only pressure–volume work is possible, constant energy and volume imply that there are no interactions between the system and its surroundings. Therefore, constant \(E\) and \(V\) imply that the system is isolated, and it must be true that \(\Delta \hat{S}=0\). In this case, a spontaneous process in which \(E\) and \(V\) are constant must be accompanied by an increase in the entropy of the system. (If \(V\) is constant and only pressure–volume work is possible, the process involves no work.) We have a criterion for spontaneous change:\[{\left(\Delta S\right)}_{EV}>0 \nonumber \] (spontaneous process, only pressure–volume work) where the subscripts indicate that the energy and volume of the system are constant. (In Section 9.21, we arrive at this conclusion by a different argument.) This page titled 9.7: Entropy Changes for A Spontaneous Process in An Isolated System is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,237 |
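As a worked illustration of the criterion \({\left(\Delta S\right)}_{EV}>0\), consider the simplest version of the free-expansion example: one mole of an ideal gas, whose temperature does not change on free expansion (the vessel in the text contains a real gas, for which the temperature does change; the ideal gas is assumed here only for simplicity). The entropy change is evaluated along a reversible isothermal path between the same initial and final states; the volumes are illustrative.

```python
import math

# Entropy change for a free expansion, evaluated along a reversible path.
# One mole of an ideal gas doubles its volume into a vacuum (illustrative
# volumes); along the reversible isothermal path between the same states,
# dS = R ln(V_final/V_initial).
R = 8.314                           # J/(mol K)
V_initial, V_final = 0.010, 0.020   # m^3

dS_system = R * math.log(V_final / V_initial)   # about +5.76 J/K
dS_surroundings = 0.0   # the system is isolated; nothing else changes

print(dS_system + dS_surroundings > 0)   # True: a spontaneous process
```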
9.8: The Entropy of the Universe
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.08%3A_The_Entropy_of_the_Universe | In Section 9.7, we conclude that the entropy change is positive for any spontaneous change in an isolated system. Since we can consider the universe to be an isolated system, it follows that \(\Delta S_{universe}>0\) for any spontaneous process. To reach this conclusion by a more detailed argument, let us consider an arbitrary system that is in contact with its surroundings. We can subdivide these surroundings into subsystems: a nearby subsystem (Surroundings 1) that interacts with the system and a more remote surroundings subsystem (Surroundings 2) that does not. That is, we assume that we can define Surroundings 2 so that it is unaffected by the process. Then we define an augmented system consisting of the original system plus Surroundings 1. The augmented system is isolated from the remote portion of the surroundings, so that the entropy change for the augmented system is positive by the argument in the previous section. Denoting entropy changes for the system, Surroundings 1, Surroundings 2, and the augmented system by \(\Delta S\), \(\Delta {\hat{S}}_1\), \(\Delta {\hat{S}}_2\), and \(\Delta S_{augmented}\), respectively, we have \(\Delta S_{augmented}=\Delta S+\Delta {\hat{S}}_1>0\), and \(\Delta \hat{S}=\Delta {\hat{S}}_1+\Delta {\hat{S}}_2\). Since the remote portion of the surroundings is unaffected by the change, we have \(\Delta {\hat{S}}_2=0\). For any spontaneous change, whether the system is isolated or not, we have\[\Delta S_{universe}=\Delta S+\Delta {\hat{S}}_1+\Delta {\hat{S}}_2=\Delta S_{augmented}+\Delta {\hat{S}}_2=\Delta S_{augmented}>0 \nonumber \] (any spontaneous change) This statement is an essential part of the entropy-based statement of the second law. We have now developed it from the machine-based statement of the second law by convincing, but not entirely rigorous, arguments. In Section 9.6 we find that \(\Delta S_{universe}=\Delta S+\Delta \hat{S}=0\) for any reversible process. Thus, for any possible process, we have\[\Delta S_{universe}=\Delta S+\Delta \hat{S}\ge 0 \nonumber \] The equality applies when the process is reversible; the inequality applies when it is spontaneous. Because entropy is a state function, \(\Delta S\) and \(\Delta \hat{S}\) change sign when the direction of a process is reversed. We say that a process for which \(\Delta S+\Delta \hat{S}<0\) is an impossible process. Our definitions mean that these classifications—reversible, spontaneous, and impossible—are exhaustive and mutually exclusive. We conclude that \(\Delta S_{universe}=\Delta S+\Delta \hat{S}=0\) is necessary and sufficient for a process to be reversible; \(\Delta S_{universe}=\Delta S+\Delta \hat{S}>0\) is necessary and sufficient for a process to be spontaneous. (See problem 19.) This page titled 9.8: The Entropy of the Universe is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,238 |
9.9: The Significance of The Machine-based Statement of The Second Law
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.09%3A_The_Significance_of_The_Machine-based_Statement_of_The_Second_Law | Our entropy-based statement of the second law asserts the definition and basic properties of entropy that we need in order to make predictions about natural processes. The ultimate justification for these assertions is that the predictions they make agree with experimental observations. We have devoted considerable attention to arguments that develop the definition and properties of entropy from the machine-based statement of the second law. These arguments parallel those that were made historically as these concepts were developed. Understanding these arguments greatly enhances our appreciation for the relationship between the properties of the entropy function and the changes that can occur in various physical systems. While these arguments demonstrate that our machine-based statement implies the entropy-based statement, we introduce additional postulates in order to make them. These include: the premise that the pressure, temperature, volume, and energy of a reversible system are continuous functions of one another; Duhem’s theorem; the tiling theorem; and the presumption that the conclusions we develop for pressure–volume work are valid for any form of work. We can sum up this situation by saying that our machine-based statement serves a valuable heuristic purpose. The entropy-based statement of the second law is a postulate that we infer by reasoning about the consequences of the machine-based statement. When we want to apply the second law to physical systems, the entropy-based statement and other statements that we introduce below are much more useful. Finally, we note that our machine-based statement of the second law is not the only statement of this type. Other similar statements have been given. The logical relationships among them are interesting, and they can be used to develop the entropy-based statement of the second law by arguments similar to those we make in Section 9.2 to Section 9.8. This page titled 9.9: The Significance of The Machine-based Statement of The Second Law is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,239 |
9.10: A Slightly Philosophical Digression on Energy and Entropy
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.10%3A_A_Slightly_Philosophical_Digression_on_Energy_and_Entropy | The content of the first law of thermodynamics is that there is a state function, which we call energy, which has the property that \(\Delta E_{universe}=0\) for any process that can occur. The content of the second law is that there is a state function, which we call entropy, which has the property that \(\Delta S_{universe}>0\) for any spontaneous process. These two state functions exhaust the range of independent possibilities: Suppose that we aspire to find a new and independent state function, call it \(B\), which further characterizes the possibilities open to the universe. What other condition could \(B\) impose on the universe—or vice versa? The only available candidate might appear to be \(\Delta B_{universe}<0\). However, this does not represent an independent condition, since its role is already filled by the quantity \(-\Delta S_{universe}\). Of course, we can imagine a state function, \(B\), which is not simply a function of \(S\), but for which \(\Delta B_{universe}>0\), \(\Delta B_{universe}=0\), or \(\Delta B_{universe}<0\), according as the process is spontaneous, reversible, or impossible, respectively. For any given change, \(\Delta B\) would not be the same as \(\Delta S\); however, \(\Delta B\) and \(\Delta S\) would make exactly the same predictions. If \(\Delta B_{universe}\) were more easily evaluated than \(\Delta S_{universe}\), we would prefer to use \(B\) rather than \(S\). Nevertheless, if there were such a function \(B\), its role in our description of nature would duplicate the role played by \(S\). This page titled 9.10: A Slightly Philosophical Digression on Energy and Entropy is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,240 |
9.11: A Third Statement of the Second Law
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.11%3A_A_Third_Statement_of_the_Second_Law | Let us consider another frequently cited alternative statement of the second law, which, for easy reference, we call the temperature-based statement of the second law: The spontaneous transfer of heat from a colder body to a warmer body is impossible. In the discussion below, we refer to this statement as proposition \(\mathrm{TSL}\). By “body”, we simply mean any system or object. By the “spontaneous transfer of heat,” we mean that the transfer of heat energy can be initiated by bringing the two bodies into contact with one another or by enabling the transmission of radiant energy between them. The surroundings do no work and exchange no heat with either body; there is no change of any sort in the surroundings. We can show that the entropy-based statement and the temperature-based statement of the second law are equivalent: Given the definition of entropy, one implies the other. Let us begin by showing that the entropy-based statement implies the temperature-based statement of the second law. That is, we prove \(\mathrm{SL}\Rightarrow \mathrm{TSL}\). To do so, we prove \(\sim \mathrm{TSL}\Rightarrow \sim \mathrm{SL}\). That is, we assume that spontaneous transfer of heat from a colder to a warmer body is possible and show that this leads to a contradiction of the entropy-based statement of the second law. Let the quantity of heat received by the warmer body be \(dq_{warmer}>0\), and let the temperatures of the warmer and colder bodies be \(T_{warmer}\) and \(T_{colder}\), respectively. We have \(T_{warmer}-T_{colder}>0\). The colder body receives heat \(dq_{colder}=-dq_{warmer}<0\). We make the heat increment so small that there is no significant change in the temperature of either body. No other changes occur. The two bodies are the only portions of the universe that are affected. Let the entropy changes for the warmer and colder bodies be \(dS_{warmer}\) and \(dS_{colder}\), respectively. To find \(dS_{colder}\) and \(dS_{warmer}\) we must find a reversible path to effect the same changes. This is straightforward. We can effect identically the same change in the warmer body by transferring heat, \(dq_{warmer}>0\), to it through contact with some third body, whose temperature is infinitesimally greater than \(T_{warmer}\). This process is reversible, and the entropy change is \(dS_{warmer}={dq_{warmer}}/{T_{warmer}}\). Similarly, the entropy change for the colder body is \(dS_{colder}={dq_{colder}}/{T_{colder}}=-{dq_{warmer}}/{T_{colder}}\). It follows that\[ \begin{aligned} dS_{universe} & =dS_{warmer}+dS_{colder} \\ ~ & =\frac{dq_{warmer}}{T_{warmer}}-\frac{dq_{warmer}}{T_{colder}} \\ ~ & =-dq_{warmer}\left(\frac{T_{warmer}-T_{colder}}{T_{warmer}T_{colder}}\right) \\ ~ & <0 \end{aligned} \nonumber \]However, if \({dS}_{universe}<0\) for a spontaneous process, the second law (\(\mathrm{SL}\)) must be false. We have shown that a violation of the temperature-based statement implies a violation of the entropy-based statement of the second law: \(\sim \mathrm{TSL}\Rightarrow \sim \mathrm{SL}\), so that \(\mathrm{SL}\Rightarrow \mathrm{TSL}\). It is equally easy to show that the temperature-based statement implies the entropy-based statement of the second law. 
To do so, we assume that the entropy-based statement is false and show that this implies that the temperature-based statement must be false. By the arguments above, the entropy change that the universe experiences during the exchange of the heat increment is\[{dS}_{universe}=-dq_{warmer}\left(\frac{T_{warmer}-T_{colder}}{T_{warmer}T_{colder}}\right) \nonumber \]If the entropy-based statement of the second law is false, then \({dS}_{universe}<0\). It follows that \(dq_{warmer}>0\); that is, the spontaneous process transfers heat from the colder to the warmer body. This contradicts the temperature-based statement. That is, \(\sim \mathrm{SL}\Rightarrow \sim \mathrm{TSL}\), so that \(\mathrm{TSL}\Rightarrow \mathrm{SL}\). This page titled 9.11: A Third Statement of the Second Law is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,241 |
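The sign analysis in this section reduces to one line of arithmetic. The sketch below evaluates \(dS_{universe}\) for both directions of transfer; the temperatures and heat increment are illustrative values.

```python
# dS_universe for transferring a small heat increment between two bodies,
# as in the argument above.  Illustrative temperatures.
T_warm, T_cold = 350.0, 300.0     # K
dq = 1.0                          # heat increment, J

# Transfer from the colder body to the warmer body (dq_warmer = +dq):
print(dq / T_warm - dq / T_cold)     # negative: forbidden by the second law

# Transfer from the warmer body to the colder body (dq_warmer = -dq):
print(-dq / T_warm + dq / T_cold)    # positive: the spontaneous direction
```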
9.12: Entropy and Predicting Change
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.12%3A_Entropy_and_Predicting_Change | The entropy-based criteria that we develop in Section 9.2 through Section 9.8 are of central importance. If we are able to evaluate the change in the entropy of the universe for a prospective process and find that it is greater than zero, we can conclude that the process can occur spontaneously. The reverse of a spontaneous process cannot occur; it is an impossible process and the change in the entropy of the universe for such a process must be less than zero. Since an equilibrium process is a reversible process, the entropy of the universe must remain unchanged when a system goes from an initial state to a final state along a path whose every point is an equilibrium state. Using another figure of speech, we often say that a change that occurs along a reversible path is a change that “occurs at equilibrium.” These conclusions are what make the entropy function useful: If we can calculate \({\Delta S}_{universe}\) for a prospective process, we know whether the system is at equilibrium with respect to that process; whether the process is possible; or whether the process cannot occur. If we find \({\Delta S}_{universe}>0\) for a process, we can conclude that the process is possible; however, we cannot conclude that the process will occur. Indeed, many processes can occur spontaneously but do not do so. For example, hydrocarbons can react spontaneously with oxygen; most do so only at elevated temperatures or in the presence of a catalyst. The criteria \(\Delta S_{universe}=\Delta S+\Delta \hat{S}\ge 0\) are completely general. They apply to any process occurring under any conditions. To apply them we must determine both \(\Delta S\) and \(\Delta \hat{S}\). By definition, the system comprises the part of the universe that is of interest to us; the need to determine \(\Delta \hat{S}\) would appear to be a nuisance. This proves not to be the case. So long as the surroundings have a well-defined temperature, we can develop additional criteria for equilibrium and spontaneous change in which \(\Delta \hat{S}\) does not occur explicitly. In §14, we develop criteria that apply to reversible processes. In §15, we find a general relationship for \(\Delta \hat{S}\) that enables us to develop criteria for spontaneous processes. To develop the criteria for spontaneous change, we must define what we mean by spontaneous change more precisely. To define a spontaneous process in an isolated system as one that can take place on its own is reasonably unambiguous. However, when a system is in contact with its surroundings, the properties of the surroundings affect the change that occurs in the system. To specify a particular spontaneous process we must specify some properties of the surroundings or—more precisely—properties of the system that the surroundings act to establish. The ideas that we develop in §15 lead to criteria for changes that occur while one or more thermodynamic functions remain constant. These criteria supplement the second-law criteria \(\Delta S+\Delta \hat{S}\ge 0\). In using these criteria, we can say that the change occurs subject to one or more constraints. Some of these criteria depend on the magnitudes of \(\Delta E\) and \(\Delta H\) in the prospective process. 
We also find criteria that are expressed using new state functions that we call the Helmholtz and Gibbs free energies. In the next section, we introduce these functions.This page titled 9.12: Entropy and Predicting Change is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,242 |
9.13: Defining the Helmholtz and Gibbs Free Energies
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.13%3A_Defining_the_Helmholtz_and_Gibbs_Free_Energies

The first and second laws of thermodynamics define energy and entropy. Energy and entropy are fundamental state functions that we use to define other state functions. In Chapter 8, we use the energy function to define enthalpy. We use the energy and entropy functions to define two more state functions that also prove to have useful properties. These are the Helmholtz and Gibbs free energies. The Helmholtz free energy is usually given the symbol \(A\), and the Gibbs free energy is usually given the symbol \(G\). We define them by\[ \begin{array}{c c} A=E-TS ~ & ~ \text{(Helmholtz free energy)} \end{array} \nonumber \]and\[ \begin{array}{c c} G=H-TS ~ & ~ \text{(Gibbs free energy)} \end{array} \nonumber \]Note that \(PV\), \(TS\), \(H\), \(A\), and \(G\) all have the units of energy, \(E\).

The sense of the name "free energy" is that a constant-temperature process in which a system experiences an entropy increase (\(\Delta S>0\)) is one in which the system's ability to do work in the surroundings is increased by an energy increment \(T\Delta S\). Then, adding \(T\Delta S\) to the internal energy lost by the system yields the amount of energy that the process actually has available (energy that is "free") to do work in the surroundings. When we consider how \(\Delta A\) and \(\Delta G\) depend on the conditions under which the system changes, we find that this idea leads to useful results.

The rest of this chapter develops important equations for \(\Delta E\), \(\Delta H\), \(\Delta S\), \(\Delta A\), and \(\Delta G\) that result when we require that a system change occur under particular sets of conditions.
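To keep the definitions concrete, here is a small numeric sketch (my own illustration; every state-function value below is an arbitrary example number, not data for any particular substance) that evaluates \(A\) and \(G\) and confirms the identity \(G-A=H-E=PV\).

    def helmholtz(E, T, S):
        # A = E - T*S
        return E - T * S

    def gibbs(H, T, S):
        # G = H - T*S
        return H - T * S

    E = 1500.0     # internal energy, J (illustrative)
    PV = 250.0     # pressure-volume product, J (illustrative)
    H = E + PV     # enthalpy, J
    T = 298.15     # temperature, K
    S = 5.0        # entropy, J/K (illustrative)

    print(f"A = {helmholtz(E, T, S):.2f} J")    # 9.25 J
    print(f"G = {gibbs(H, T, S):.2f} J")        # 259.25 J
    print(f"G - A = {gibbs(H, T, S) - helmholtz(E, T, S):.2f} J (equals PV)")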
9.14: The Fundamental Equation and Other Criteria for Reversible Change
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.14%3A_The_Fundamental_Equation_and_Other_Criteria_for_Reversible_Change

To begin exploring the possibilities for stating the criteria for change using only the properties of the system, let us consider how some thermodynamic functions change when a process is reversible. We consider a closed system and focus on making incremental changes in the state of the system. For a reversible process, we have \(dq^{rev}=TdS\). The reversible pressure–volume work is \(dw^{rev}_{PV}=-PdV\). If non-pressure–volume work is also possible, the reversible work becomes \(dw^{rev}=-PdV+dw^{rev}_{NPV}\), where \(dw^{rev}_{NPV}\) is the increment of reversible, non-pressure–volume work. The energy change is\[dE=dq^{rev}+dw^{rev}=TdS-PdV+dw^{rev}_{NPV} \nonumber \] (any reversible process)

This equation is of central importance. It is sometimes called the combined first and second laws of thermodynamics or the fundamental equation. It applies to any closed system that is undergoing reversible change. It specifies a relationship among the changes in energy, entropy, and volume that must occur if the system is to remain at equilibrium while an increment of non-pressure–volume work, \(dw^{rev}_{NPV}\), is done on it. The burden of our entire development is that any reversible process must satisfy this equation. Conversely, any process that satisfies this equation must be reversible.

For a reversible process at constant entropy, we have \(dS=0\), so that \({\left(dE\right)}_S=-PdV+dw^{rev}_{NPV}\). Since \(-PdV\) is the reversible pressure–volume work, \(dw^{rev}_{PV}\), and the sum \(dw^{rev}_{net}=-PdV+dw^{rev}_{NPV}\) is the net work, we have\[{\left(dE\right)}_S=dw^{rev}_{net} \nonumber \] (reversible process, constant S)where the subscript "\(S\)" specifies that the entropy is constant. For a reversible process in which all of the work is pressure–volume work, we have \(dw^{rev}_{NPV}=0\), and the fundamental equation becomes\[dE=TdS-PdV \nonumber \] (reversible process, only pressure–volume work)

For a reversible process in which only pressure–volume work is possible, this equation gives the amount, \(dE\), by which the energy must change when the entropy changes by \(dS\) and the volume changes by \(dV\).

Now, let us apply the fundamental equation to an arbitrary process that occurs reversibly and at constant entropy and constant volume. Under these conditions, \(dS=0\) and \(dV=0\). Therefore, at constant entropy and volume, a necessary and sufficient condition for the process to be reversible—and hence to be continuously in an equilibrium state as the process takes place—is that\[{\left(dE\right)}_{SV}=dw^{rev}_{NPV} \nonumber \] (reversible process)and if only pressure–volume work is possible,\[{\left(dE\right)}_{SV}=0 \nonumber \] (reversible process, only pressure–volume work)where the subscripts indicate that entropy and volume are constant.

If we consider an arbitrary reversible process that occurs at constant energy and volume, we have \(dE=0\) and \(dV=0\), and the fundamental equation reduces to\[{\left(dS\right)}_{EV}=-\frac{dw^{rev}_{NPV}}{T} \nonumber \] (reversible process)and if only pressure–volume work is possible,\[{\left(dS\right)}_{EV}=0 \nonumber \] (reversible process, only pressure–volume work)

In this case, as noted in §7, the system is isolated.
In §1-6, we note that an isolated system in an equilibrium state can undergo no further change. Thus, the condition \({\left(dS\right)}_{EV}=0\) defines a unique or primitive equilibrium state.

If a closed system behaves reversibly, any composition changes that occur in the system must be reversible. For chemical applications, composition changes are of paramount importance. We return to these considerations in Chapter 14, where we relate the properties of chemical substances—their chemical potentials—to the behavior of systems undergoing both reversible and spontaneous composition changes.

If a closed system behaves reversibly and only pressure–volume work is possible, we see from the fundamental equation that specifying the changes in any two of the three variables, \(E\), \(S\), and \(V\), is sufficient to specify the change in the system. In particular, if energy and entropy are constant, \(dE=dS=0\), the volume is also constant, \(dV=0\), and the system is isolated. Thus, the state of an equilibrium system whose energy and entropy are fixed is unique; \(dE=dS=0\) specifies a primitive equilibrium state. We see that the internal consistency of our model passes a significant test: From the entropy-based statement of the second law, we deduce the same proposition that we introduce in §7 as a heuristic conjecture. In Chapter 10, we expand on this idea.

Starting from the fundamental equation, we can find similar sets of relationships for enthalpy, the Helmholtz free energy, and the Gibbs free energy. We define \(H=E+PV\). For an incremental change in a system, we have\[dH=dE+PdV+VdP \nonumber \]Using the fundamental equation to substitute for \(dE\), this becomes\[dH=TdS-PdV+dw^{rev}_{NPV}+PdV+VdP=TdS+VdP+dw^{rev}_{NPV} \nonumber \]For a reversible process in which all of the work is pressure–volume work, we have\[dH=TdS+VdP \nonumber \] (reversible process, only pressure–volume work)

For a reversible process in which only pressure–volume work is possible, this equation gives the amount, \(dH\), by which the enthalpy must change when the entropy changes by \(dS\) and the pressure changes by \(dP\). If a reversible process occurs at constant entropy and pressure, then \(dS=0\) and \(dP=0\). At constant entropy and pressure, the process is reversible if and only if\[{\left(dH\right)}_{SP}=dw^{rev}_{NPV} \nonumber \] (reversible process)

If only pressure–volume work is possible,\[{\left(dH\right)}_{SP}=0 \nonumber \] (reversible process, only pressure–volume work)where the subscripts indicate that entropy and pressure are constant.

If we consider an arbitrary reversible process that occurs at constant enthalpy and pressure, we have \(dH=0\) and \(dP=0\), and the total differential for \(dH\) reduces to\[{\left(dS\right)}_{HP}=-\frac{dw^{rev}_{NPV}}{T} \nonumber \] (reversible process)and if only pressure–volume work is possible,\[{\left(dS\right)}_{HP}=0 \nonumber \] (reversible process, only pressure–volume work)

From \(A=E-TS\), we have \(dA=dE-TdS-SdT\). Using the fundamental equation to substitute for \(dE\), we have\[dA=TdS-PdV+dw^{rev}_{NPV}-TdS-SdT=-PdV-SdT+dw^{rev}_{NPV} \nonumber \]For a reversible process in which all of the work is pressure–volume work,\[dA=-SdT-PdV \nonumber \] (reversible process, only pressure–volume work)

For a reversible process in which only pressure–volume work is possible, this equation gives the amount, \(dA\), by which the Helmholtz free energy must change when the temperature changes by \(dT\) and the volume changes by \(dV\).
For a reversible isothermal process, we have \(dT=0\), and from \(dA=-PdV-SdT+dw^{rev}_{NPV}\) we have\[{\left(dA\right)}_T=-PdV+dw^{rev}_{NPV}=dw^{rev}_{PV}+dw^{rev}_{NPV}=dw^{rev}_{net} \nonumber \] (reversible isothermal process)where we recognize that the reversible pressure–volume work is \(dw^{rev}_{PV}=-PdV\), and the work of all kinds is \(dw^{rev}_{net}=dw^{rev}_{PV}+dw^{rev}_{NPV}\). We see that \({\left(dA\right)}_T\) is the total of all the work done on the system in a reversible process at constant temperature. This is the reason that "\(A\)" is used as the symbol for the Helmholtz free energy: "\(A\)" is the initial letter in "Arbeit," a German noun whose meaning is equivalent to that of the English noun "work."

If a reversible process occurs at constant temperature and volume, we have \(dT=0\) and \(dV=0\). At constant temperature and volume, a process is reversible if and only if\[{\left(dA\right)}_{TV}=dw^{rev}_{NPV} \nonumber \] (reversible process)

If only pressure–volume work is possible,\[{\left(dA\right)}_{TV}=0 \nonumber \] (reversible process, only pressure–volume work)where the subscripts indicate that volume and temperature are constant. (Of course, these conditions exclude all work, because constant volume implies that there is no pressure–volume work.)

From\[G=H-TS=E+PV-TS \nonumber \]and the fundamental equation, we have\[dG=dE+PdV+VdP-SdT-TdS=TdS-PdV+dw^{rev}_{NPV}+PdV+VdP-SdT-TdS=-SdT+VdP+dw^{rev}_{NPV} \nonumber \]For a reversible process in which all of the work is pressure–volume work,\[dG=-SdT+VdP \nonumber \] (reversible process, only pressure–volume work)

For a reversible process in which only pressure–volume work is possible, this equation gives the amount, \(dG\), by which the Gibbs free energy must change when the temperature changes by \(dT\) and the pressure changes by \(dP\). For a reversible process that occurs at constant temperature and pressure, \(dT=0\) and \(dP=0\). At constant temperature and pressure, the process will be reversible if and only if\[{\left(dG\right)}_{TP}=dw^{rev}_{NPV} \nonumber \] (any reversible process)

If only pressure–volume work is possible,\[{\left(dG\right)}_{TP}=0 \nonumber \] (reversible process, only pressure–volume work)where the subscripts indicate that temperature and pressure are constant.

In this section, we develop several criteria for reversible change, stating these criteria as differential expressions. Since each of these expressions applies to every incremental part of a reversible change that falls within its scope, corresponding expressions apply to finite changes. For example, we find \({\left(dE\right)}_S=dw^{rev}_{net}\) for every incremental part of a reversible process in which the entropy has a constant value. Since we can find the energy change for a finite amount of the process by summing up the energy changes in every incremental portion, it follows that\[{\left(\Delta E\right)}_S=w^{rev}_{net} \nonumber \] (reversible process)

Each of the other differential-expression criteria for reversible change also gives rise to a corresponding criterion for a finite reversible change. These criteria are summarized in §25.

In developing the criteria in this section, we stipulate that various combinations of the thermodynamic functions that characterize the system are constant. We develop these criteria for systems undergoing reversible change; consequently, the requirements imposed by reversibility must be satisfied also.
In particular, the system must be composed of homogeneous phases and its temperature must be the same as that of the surroundings. The pressure of the system must be equal to the pressure applied to it by the surroundings. When we specify that a reversible process occurs at constant temperature, we mean that \(T=\hat{T}=\mathrm{constant}\). When we specify that a reversible process occurs at constant pressure, we mean that \(P=P_{applied}=\mathrm{constant}\).
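To make the constant-temperature, pressure-only relation \(dG=VdP\) concrete, here is a short Python sketch (my own illustration; the mole number, temperature, and pressures are arbitrary) that integrates \(V=nRT/P\) for an ideal gas and checks the numerical integral against the closed form \(\Delta G=nRT\ln\left(P_f/P_i\right)\).

    import math

    R = 8.314        # J mol^-1 K^-1
    n = 1.0          # mol (illustrative)
    T = 298.15       # K, held constant
    Pi, Pf = 1.0e5, 5.0e5   # Pa, initial and final pressures (illustrative)

    # Closed form: Delta G = n R T ln(Pf/Pi)
    dG_exact = n * R * T * math.log(Pf / Pi)

    # Midpoint-rule integration of dG = V dP with V = nRT/P, as a check
    steps = 100_000
    dP = (Pf - Pi) / steps
    dG_num = sum(n * R * T / (Pi + (k + 0.5) * dP) * dP for k in range(steps))

    print(f"Delta G (closed form) = {dG_exact:.1f} J")   # ~3989.5 J
    print(f"Delta G (numerical)   = {dG_num:.1f} J")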
9.15: Entropy and Spontaneous Change
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.15%3A_Entropy_and_Spontaneous_Change

In a reversible process, the changes that occur in the system are imposed by the surroundings; reversible change occurs only because the system responds to changes in the conditions imposed on it by its surroundings. A reversible process is driven by the surroundings. In contrast, a spontaneous process is driven by the system. Nevertheless, when a spontaneous process occurs under some specific set of imposed conditions (specific values of the temperature and pressure, for example), the system's equilibrium state depends on these conditions. To specify a particular spontaneous change, we must specify enough constraints to fix the final state of the system.

To see these points from a slightly different perspective, let us consider a closed reversible system in which only pressure–volume work is possible. Duhem's theorem asserts that a change in the state of this system can be specified by specifying the changes in some pair of state functions, say \(X\) and \(Y\). If the imposed values of \(X\) and \(Y\) are constant at their eventual equilibrium values, but the system is changing, the system cannot be on a Gibbsian equilibrium manifold. We say that the system is undergoing a spontaneous change at constant \(X\) and \(Y\).

This description is a figure of speech in that the system's \(X\) and \(Y\) values do not necessarily attain the imposed values and become constant until equilibrium is reached. An example is in order: A system whose original pressure and temperature are \(P_i\) and \(T_i\) can undergo a spontaneous change while the surroundings impose a constant pressure, \(P_{applied}=P_f\), and the system is immersed in a constant-temperature bath at \(T=T_f\). The pressure and temperature of the system may be indeterminate as the process occurs, but the equilibrium pressure and temperature must be \(P_f\) and \(T_f\).

If the surroundings operate to impose particular values of \(\boldsymbol{X}\) and \(\boldsymbol{Y}\) on the system, then the position at which the system eventually reaches equilibrium is determined by these values. The same equilibrium state is reached for any choice of surroundings that imposes the same values of \(\boldsymbol{X}\) and \(\boldsymbol{Y}\) on the system at the time that the system reaches equilibrium. For every additional form of non-pressure–volume work that affects the system, we must specify the value of one additional variable in order to specify a unique equilibrium state.

The entropy changes that occur in the system and its surroundings during a spontaneous process have predictive value. However, our definitions do not enable us to find the entropy change for a spontaneous process, and the temperature of the system may not have a meaningful value. On the other hand, we can always carry out the process so that the temperature of the surroundings is known at every point in the process. Indeed, if the system is in thermal contact with its surroundings as the process occurs, we cannot specify the conditions under which the process occurs without specifying the temperature of the surroundings along this path.

Consider a spontaneous process whose path can be specified by the values of a thermodynamic variable, \(Y\), and the temperature of the surroundings, \(\hat{T}\), as functions of time, \(t\).
Let us denote the curve that describes this path as \(C\). We can divide this path into short intervals. Let \(C_k\) denote a short segment of this path along which the temperature of the surroundings is approximately constant. For our present purposes, the temperature of the system, \(T\), is irrelevant; since the process is spontaneous, the temperature of the system may have no meaningful value within the interval \(C_k\). As the system traverses segment \(C_k\), it accepts a quantity of heat, \(q_k\), from the surroundings, which are at temperature \({\hat{T}}_k\). The heat exchanged by the surroundings within \(C_k\) is \({\hat{q}}_k=-q_k\). Below, we show that it is always possible to carry out the process in such a way that the change in the surroundings occurs reversibly. Then\[\Delta {\hat{S}}_k=\frac{{\hat{q}}_k}{{\hat{T}}_k}=-\frac{q_k}{{\hat{T}}_k} \nonumber \]and since \(\Delta S_k+\Delta {\hat{S}}_k>0\), it follows that\[\Delta S_k>\frac{q_k}{{\hat{T}}_k} \nonumber \]This is the Clausius inequality. It plays a central role in the thermodynamics of spontaneous processes. When we make the intervals \(C_k\) arbitrarily short, we have\[dS_k>\frac{{dq}_k}{{\hat{T}}_k} \nonumber \]

To demonstrate that we can measure the entropy change in the surroundings during a spontaneous process, let us use a conceptual device to transfer the heat, \(q_k\), that must be exchanged, from the surroundings at temperature \({\hat{T}}_k\) to the system. We interpose a reversible Carnot engine between a heat reservoir at a constant low temperature, \({\hat{T}}_{\ell }\), and a high-temperature reservoir at \({\hat{T}}_k\); the engine delivers a small increment of heat, \(\delta q\), to the high-temperature reservoir in every cycle. While the system is within \(C_k\), we maintain the Carnot engine's high-temperature reservoir at \({\hat{T}}_k\), and allow heat \(q_k\) to pass from the high-temperature reservoir to the system. The high-temperature reservoir is the only part of the surroundings that is in thermal contact with the system; \(q_k\) is the only heat exchanged by the system while it is within \(C_k\).

To maintain the high-temperature reservoir at \({\hat{T}}_k\), we operate the Carnot engine for a large integral number of cycles, \(n\), such that \(q_k\approx n\times \delta q\), and do so at a rate that just matches the rate at which heat passes from the high-temperature reservoir to the system. When the system passes from path-segment \(C_k\) to path-segment \(C_{k+1}\), we alter the steps in the reversible Carnot cycle to maintain the high-temperature reservoir at the new surroundings temperature, \({\hat{T}}_{k+1}\). The low-temperature heat reservoir for this Carnot engine is always at the constant temperature \({\hat{T}}_{\ell }\). Let the heat delivered from the high-temperature reservoir to the Carnot engine within \(C_k\) be \({\left({\hat{q}}_k\right)}_h\). We have \(q_k=-{\left({\hat{q}}_k\right)}_h\). Let the heat delivered from the low-temperature reservoir to the Carnot engine within \(C_k\) be \({\left({\hat{q}}_k\right)}_{\ell }\). Let the heat delivered to the low-temperature reservoir within \(C_k\) be \(q_{\ell }\). We have \(q_{\ell }=-{\left({\hat{q}}_k\right)}_{\ell }\). Since the Carnot engine is reversible, we have\[\frac{\left(\hat{q}_k\right)_h}{\hat{T}_k}+\frac{\left( \hat{q}_k\right)_{\ell }}{\hat{T}_{\ell }}=0 \nonumber \]and\[-\frac{q_k}{\hat{T}_k}-\frac{q_{\ell }}{\hat{T}_{\ell }}=0 \nonumber \]so that\[\frac{q_{\ell }}{\hat{T}_{\ell }}=-\frac{q_k}{\hat{T}_k} \nonumber \]

While the system is within \(C_k\), it receives an increment of heat \(q_k\) from the high-temperature reservoir. Simultaneously, three components in the surroundings also exchange heat.
Let the entropy changes in the high-temperature reservoir, the Carnot engine, and the low-temperature reservoir be \({\left(\Delta {\hat{S}}_{HT}\right)}_k\), \({\left(\Delta {\hat{S}}_{CE}\right)}_k\), and \({\left(\Delta {\hat{S}}_{LT}\right)}_k\), respectively. The high-temperature reservoir receives heat \(q_k\) from the Carnot engine and delivers the same quantity of heat to the system. The net heat accepted by the high-temperature reservoir is zero. No change occurs in the high-temperature reservoir. We have \({\left(\Delta {\hat{S}}_{HT}\right)}_k=0\). The reversible Carnot engine completes an integral number of cycles, so that \({\left(\Delta {\hat{S}}_{CE}\right)}_k=0\). The low-temperature reservoir accepts heat \({-\left({\hat{q}}_k\right)}_{\ell }=q_{\ell }\), at the fixed temperature \({\hat{T}}_{\ell }\), during the reversible operation of the Carnot engine, so that\[\left(\Delta \hat{S}_{LT}\right)_k=\frac{q_{\ell }}{\hat{T}_{\ell }}=-\frac{q_k}{\hat{T}_k} \nonumber \]

The entropy change in the surroundings as the system passes through \(C_k\) is\[\Delta \hat{S}_{k}= \left(\Delta \hat{S}_{HT}\right)_k+ \left(\Delta \hat{S}_{CE}\right)_k+\left(\Delta \hat{S}_{LT}\right)_k=\frac{q_{\ell }}{\hat{T}_{\ell }}=-\frac{q_k}{\hat{T}_k} \nonumber \]so that, as we observed above,\[\Delta S_k>-\Delta {\hat{S}}_k=\frac{q_k}{{\hat{T}}_k} \nonumber \]

Since \(C_k\) can be any part of path C, and \(C_k\) can be made arbitrarily short, we have for every increment of any spontaneous process occurring in a closed system that can exchange heat with its surroundings, \(d\hat{S}=-{dq}/{\hat{T}}\), and\[dS>\frac{dq}{\hat{T}} \nonumber \]

If the temperature of the surroundings is constant between any two points A and B on curve C, we can integrate over this interval to obtain \({\Delta }_{AB}\hat{S}=-{q_{AB}}/{\hat{T}}\) and\[{\Delta }_{AB}S>\frac{q_{AB}}{\hat{T}} \nonumber \]

For an adiabatic process, \(q=0\). For any arbitrarily small increment of an adiabatic process, \(dq=0\). It follows that \(\Delta S>0\) and \(dS>0\) for any spontaneous adiabatic process.
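The Clausius inequality is easy to verify numerically for a concrete irreversible change. The sketch below (my addition; the sample size, bath temperature, and volumes are arbitrary illustrative choices) treats the isothermal expansion of an ideal gas against a constant applied pressure and compares \(\Delta S\), computed along a reversible path between the same two states, with \(q/\hat{T}\) for the spontaneous path.

    import math

    R = 8.314               # J mol^-1 K^-1
    n, T_hat = 1.0, 300.0   # mol, K; bath temperature (T = T_hat initially and finally)
    V1, V2 = 0.010, 0.020   # m^3, initial and final volumes

    P_applied = n * R * T_hat / V2          # Pa, constant and equal to the final pressure
    w = -P_applied * (V2 - V1)              # J, work done on the gas
    q = -w                                  # J, since Delta E = 0 for this isothermal ideal gas

    dS = n * R * math.log(V2 / V1)          # J/K, from a reversible path between the same states
    print(f"Delta S   = {dS:.3f} J/K")      # 5.763 J/K
    print(f"q / T_hat = {q / T_hat:.3f} J/K")   # 4.157 J/K
    print(f"Delta S > q/T_hat ? {dS > q / T_hat}")   # True, as the Clausius inequality requires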
9.16: Internal Entropy and the Second Law
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.16%3A_Internal_Entropy_and_the_Second_Law

For every incremental part of any process, we have \(dS+d\hat{S}\ge 0\). Let us define a new quantity, the external entropy change, as \(d_eS=-d\hat{S}\). The change criteria become \(dS-d_eS\ge 0\). Now, let us define the internal entropy change as \(d_iS=dS-d_eS\). The entropy change for a system is the sum of its internal and external entropy changes, \(dS=d_iS+d_eS\). We use \(d_iS\) and \(d_eS\) to represent incremental changes. To represent macroscopic changes, we use \({\Delta }_iS\) and \({\Delta }_eS\). Since two processes can effect different changes in the surroundings while the change that occurs in the system is the same, \(\Delta \hat{S}\) and \({\Delta }_eS\) are not completely determined by the change in the state of the system. Neither the internal nor the external entropy change depends solely on the change in the state of the system. Nevertheless, we see that \(d_iS\ge 0\) or \({\Delta }_iS\ge 0\) is an alternative expression of the thermodynamic criteria.

The external entropy change is that part of the entropy change that results from the interaction between the system and its surroundings. The internal entropy change is that part of the entropy change that results from processes occurring entirely within the system. (We also use the term "internal energy." The fact that the word "internal" appears in both of these terms does not reflect any underlying relationship of material significance.) The criterion \(d_iS>0\) makes it explicit that a process is spontaneous if and only if the events occurring within the system act to increase the entropy of the system. In one common figure of speech, we say "entropy is produced" in the system in a spontaneous process. (It is, of course, possible for a spontaneous process to have \(d_iS>0\) while \(d_eS<0\), and \(dS<0\).)

In Section 14.1, we introduce a quantity,\[\sum^{\omega }_{j=1}{{\mu }_j}{dn}_j \nonumber \]that we can think of as a change in the chemical potential energy of a system. The internal entropy change is closely related to this quantity: We find\[d_iS=-\frac{1}{T}\sum^{\omega }_{j=1}{{\mu }_j}{dn}_j \nonumber \]As required by the properties of \(d_iS\), we find that \(\sum^{\omega }_{j=1}{{\mu }_j}{dn}_j\le 0\) is an expression of the thermodynamic criteria for change. Internal entropy is a useful concept that is applied to particular advantage in the analysis of many different kinds of spontaneous processes in non-homogeneous systems.
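Here is a minimal numeric sketch of this bookkeeping (my addition; it assumes one mole of ice melting at its normal melting point while the surroundings sit at room temperature, using the standard enthalpy of fusion of about 6.01 kJ/mol).

    q = 6010.0       # J, heat accepted by the system (melting one mole of ice)
    T = 273.15       # K, system temperature during the melting
    T_hat = 298.15   # K, temperature of the surroundings

    dS = q / T             # entropy change of the system
    dS_hat = -q / T_hat    # entropy change of the surroundings
    d_eS = -dS_hat         # external entropy change, d_eS = -dS_hat
    d_iS = dS - d_eS       # internal entropy change, d_iS = dS - d_eS

    print(f"dS   = {dS:.2f} J/K")     # 22.00 J/K
    print(f"d_eS = {d_eS:.2f} J/K")   # 20.16 J/K
    print(f"d_iS = {d_iS:.2f} J/K")   # ~1.8 J/K, positive for this spontaneous process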
9.17: Notation and Terminology- Conventions for Spontaneous Processes
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.17%3A_Notation_and_Terminology-_Conventions_for_Spontaneous_Processes

We now want to consider criteria for a spontaneous process in which a closed system passes from state A to state B. State B can be an equilibrium state, but state A is not. We can denote the energy change for this process as \({\Delta }_{AB}E\), and we can find it by measuring the heat and work exchanged with the surroundings as the process takes place, \({\Delta }_{AB}E=q+w\), or for a process in which the increments of heat and work are arbitrarily small, \(d_{AB}E=dq+dw\). Likewise, we can denote the entropy change for the spontaneous process as \({\Delta }_{AB}S\) or \(d_{AB}S\), but we cannot find the entropy change by measuring \(q^{spon}\) or \(dq^{spon}\). If we cannot find the entropy change, we cannot find the Helmholtz or Gibbs free energy changes from their defining relationships, \(A=E-TS\) and \(G=H-TS\). Moreover, intensive variables—pressure, temperature, and concentrations—may not have well-defined values in a spontaneously changing system.

When we say that a reversible process occurs with some thermodynamic variable held constant, we mean what we say: The thermodynamic variable has the same value at every point along the path of reversible change. In the remainder of this chapter, we develop criteria for spontaneous change. These criteria are statements about the values of \(\Delta E\), \(\Delta H\), \(\Delta A\), and \(\Delta G\) for a system that can undergo spontaneous change under particular conditions. In stating some of these criteria, we specify the conditions by saying that the pressure or the temperature is constant. As we develop these criteria, we will see that these stipulations have specific meanings. When we say that a process occurs "at constant volume" (isochorically), we mean that the volume of the system remains the same throughout the process. When we say that a spontaneous process occurs "at constant pressure" (isobarically or isopiestically), we mean that the pressure applied to the system by the surroundings is constant throughout the spontaneous process and that the system pressure is equal to the applied pressure, \(P=P_{applied}\), at all times. When we say that a spontaneous process occurs "at constant temperature," we may mean only that the system is in thermal contact with surroundings whose temperature is constant at \(\hat{T}\) and that the initial and final temperatures of the system are equal to \(\hat{T}\).
9.18: The Heat Exchanged by A Spontaneous Process at Constant Entropy
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.18%3A_The_Heat_Exchanged_by_A_Spontaneous_Process_at_Constant_Entropy

To continue our effort to find change criteria that use only properties of the system, let us consider a spontaneous process, during which the system is in contact with its surroundings and the entropy of the system is constant. For every incremental part of this process, we have \(dS=0\) and \(dS+d\hat{S}>0\). Hence, \(d\hat{S}>0\). It follows that \(\Delta S=0\), \(\Delta S+\Delta \hat{S}>0\), and \(\Delta \hat{S}>0\). (Earlier, we found that the entropy changes for a spontaneous process in an isolated system are \(\Delta S>0\) and \(\Delta \hat{S}=0\). The present system is not isolated.) Since the change that occurs in the system is irreversible, \(dS=0\) does not mean that \(dq=0\). The requirement that \(dS=0\) places no constraints on the temperature of the system or of the surroundings at any time before, during, or after the process occurs.

In Section 9.15, we find \(dS>dq^{spon}/\hat{T}\) for any spontaneous process in a closed system. If the entropy of the system is constant, we have\[{dq}^{spon}<0 \nonumber \] (spontaneous process, constant entropy)for every incremental part of the process. For any finite change, it follows that the overall heat must satisfy the same inequality:\[q^{spon}<0 \nonumber \] (spontaneous process, constant entropy)

For a spontaneous process that occurs with the system in contact with its surroundings, but in which the entropy of the system is constant, the system must give up heat to the surroundings. \(dq<0\) and \(q<0\) are criteria for spontaneous change at constant system entropy.

In Section 9.14, we develop criteria for reversible processes. The criteria relate changes in the system's state functions to the reversible non-pressure–volume work that is done on the system during the process. Now we can develop parallel criteria for spontaneous processes.
9.19: The Energy Change for A Spontaneous Process at Constant S and V
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.19%3A_The_Energy_Change_for_A_Spontaneous_Process_at_Constant_S_and_V

From the fundamental equation, \(dE-TdS+PdV=dw^{rev}_{NPV}\) for a reversible process. We find that the criterion for reversible change at constant entropy is \({\left(dE\right)}_S=dw^{rev}_{net}\). For a reversible process at constant entropy and volume, we find \({\left(dE\right)}_{SV}=dw^{rev}_{NPV}\).

To consider the energy change for a spontaneous process, we begin with \(dE=dq+dw\), which is independent of whether the change is spontaneous or reversible. For a spontaneous process in which both pressure–volume work, \(dw^{spon}_{PV}\), and non-pressure–volume work, \(dw^{spon}_{NPV}\), are possible, we have \(dE=dq^{spon}+dw^{spon}_{PV}+dw^{spon}_{NPV}\), which we can rearrange to\[dE-dw^{spon}_{PV}-dw^{spon}_{NPV}=dq^{spon} \nonumber \]

For a spontaneous, constant-entropy change that occurs while the system is in contact with its surroundings, we have \({dq}^{spon}<0\). Hence, we have \({\left(dE\right)}_S-{dw}^{spon}_{PV}-{dw}^{spon}_{NPV}<0\). Letting \({dw}^{spon}_{net}={dw}^{spon}_{PV}+{dw}^{spon}_{NPV}\), we can express this as\[{\left(dE\right)}_S<{dw}^{spon}_{net} \nonumber \] (spontaneous process, constant S)

If we introduce the further condition that the spontaneous process occurs while the volume of the system remains constant, we have \({dw}^{spon}_{PV}=0\). Making this substitution and repeating our earlier result for a reversible process, we have the parallel relationships\[{\left(dE\right)}_{SV}<{dw}^{spon}_{NPV} \nonumber \] (spontaneous process, constant S and V)\[{\left(dE\right)}_{SV}={dw}^{rev}_{NPV} \nonumber \] (reversible process, constant S and V)

If we introduce the still further requirement that only pressure–volume work is possible, we have \({dw}^{spon}_{NPV}=0\). The parallel relationships become\[{\left(dE\right)}_{SV}<0 \nonumber \] (spontaneous process, constant S and V, only PV work)\[{\left(dE\right)}_{SV}=0 \nonumber \] (reversible process, constant S and V, only PV work)

These equations state the criteria for change under conditions in which the entropy and volume of the system remain constant. If the process is reversible, the energy change must be equal to the non-pressure–volume work. If the process is spontaneous, the energy change must be less than the non-pressure–volume work. If only pressure–volume work is possible, the energy of the system must decrease in a spontaneous process and remain constant in a reversible process. Each of these differential-expression criteria applies to every incremental part of a change that falls within its scope. In consequence, corresponding criteria apply to finite spontaneous changes. These criteria are listed in the summary in Section 9.25.

Now the question arises: What sort of system can undergo a change at constant entropy? If the process is reversible and involves no heat, the entropy change will be zero. If we have a system consisting of a collection of solid objects at rest, we can rearrange the objects without transferring heat between the objects and their surroundings. For such a process, the change in the energy of the system is equal to the net work done on the system.
Evidently, reversible changes in mechanical systems occur at constant entropy and satisfy the criterion\[{\left(dE\right)}_S={dw}^{rev}_{net} \nonumber \]For a change that occurs reversibly and in which the entropy of the system is constant, the energy change is equal to the net work (of all kinds) done on the system. A spontaneous change in a mechanical system dissipates mechanical energy as heat by friction. If this heat appears in the surroundings and the thermal state of the system remains unchanged, such a spontaneous process satisfies the criterion\[{\left(dE\right)}_S<{dw}^{spon}_{net} \nonumber \]

We have arrived at the criterion for change that we are accustomed to using when we deal with a change in the potential energy of a constant-temperature mechanical system: A spontaneous change can occur in such a system if and only if the change in the system's energy is less than the net work done on it. The excess work is degraded to heat that appears in the surroundings. This convergence notwithstanding, the principles of mechanics and those of thermodynamics, while consistent with one another, are substantially independent. We address this issue briefly in Section 12.2.

In the next section, we develop spontaneous-change criteria based on the enthalpy change for a constant-entropy process. In subsequent sections, we consider other constraints and find other criteria. We find that the Helmholtz and Gibbs free energy functions are useful because they provide criteria for spontaneous change when the process is constrained to occur isothermally.
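As a numeric illustration of this mechanical criterion, the sketch below (my addition; the forces and displacement are arbitrary illustrative values) pushes a block across a rough surface while the frictional heat is surrendered to the surroundings, so that the system's thermal state, and hence its entropy, is unchanged.

    F_applied = 10.0   # N, applied force doing work on the block (illustrative)
    F_friction = 4.0   # N, frictional force (illustrative)
    d = 1.0            # m, displacement

    w_net = F_applied * d   # J, net work done on the system
    q = -F_friction * d     # J, frictional heat surrendered to the surroundings
    dE = q + w_net          # J, first law: Delta E = q + w

    print(f"w_net = {w_net:.1f} J, q = {q:.1f} J, Delta E = {dE:.1f} J")
    print(f"(Delta E)_S < w_net ? {dE < w_net}")   # True: the change is spontaneous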
9.20: The Enthalpy Change for A Spontaneous Process at Constant S and P
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.20%3A_The_Enthalpy_Change_for_A_Spontaneous_Process_at_Constant_S_and_P

From \(H=E+PV\), we have \(dH=dE+d\left(PV\right)\). For a spontaneous process in which both pressure–volume and non-pressure–volume work are possible, we can write this as \(dH={dq}^{spon}+{dw}^{spon}_{PV}+{dw}^{spon}_{NPV}+d\left(PV\right)\), which we can rearrange to \(dH-{dw}^{spon}_{PV}-{dw}^{spon}_{NPV}-d\left(PV\right)={dq}^{spon}\). For a spontaneous constant-entropy change that occurs while the system is in contact with its surroundings, we have \({dq}^{spon}<0\), so that\[{\left(dH\right)}_S-{dw}^{spon}_{PV}-{dw}^{spon}_{NPV}-d\left(PV\right)<0 \nonumber \]

Now, let us introduce the additional constraint that the system is subjected to a constant applied pressure, \(P_{applied}\), throughout the process. Thus \(P_{applied}\) is a well-defined property that can be measured at any stage of the process. The incremental pressure–volume work done by the surroundings on the system is \({dw}^{spon}_{PV}=-P_{applied}dV\). In principle, the system can undergo spontaneous change so rapidly that there can be a transitory difference between the system pressure and the applied pressure. In practice, pressure adjustments occur very rapidly. Except in extreme cases, we find that \(P=P_{applied}\) is a good approximation at all times. Then the change in the pressure–volume product is \(d\left(PV\right)=P_{applied}dV\). Making these substitutions, the enthalpy inequality becomes\[{\left(dH\right)}_{SP}<{dw}^{spon}_{NPV} \nonumber \] (spontaneous process, constant S and \(P_{applied}\))

From our earlier discussion of reversible processes, we have the parallel relationship\[{\left(dH\right)}_{SP}={dw}^{rev}_{NPV} \nonumber \] (any reversible process, constant S and \(P_{applied}\))

If we introduce the still further requirement that only pressure–volume work is possible, we have \({dw}_{NPV}=0\). The parallel relationships become\[{\left(dH\right)}_{SP}<0 \nonumber \] (spontaneous process, constant S and P, only PV work)\[{\left(dH\right)}_{SP}=0 \nonumber \] (reversible process, constant S and P, only PV work)

These equations state the criteria for change under conditions in which the entropy and pressure of the system remain constant. If the process is reversible, the enthalpy change must be equal to the non-pressure–volume work. If the process is spontaneous, the enthalpy change must be less than the non-pressure–volume work. If only pressure–volume work is possible, the enthalpy of the system must decrease in a spontaneous process and remain constant in a reversible process. Since each of these differential criteria applies to every incremental part of a change that falls within its scope, corresponding criteria apply to finite spontaneous changes. These criteria are listed in the summary in Section 9.25.
9.21: The Entropy Change for A Spontaneous Process at Constant E and V
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.21%3A_The_Entropy_Change_for_A_Spontaneous_Process_at_Constant_E_and_V

For any spontaneous process, we have \(dE={dq}^{spon}+{dw}^{spon}\), which we can rearrange to \({dq}^{spon}=dE-{dw}^{spon}\). Substituting our result from Section 9.15, we have\[\hat{T}dS>dE-dw^{spon} \nonumber \] (spontaneous process)

If the energy of the system is constant throughout the process, we have \(dE=0\) and\[\hat{T}{\left(dS\right)}_E>-dw^{spon} \nonumber \] (spontaneous process, constant energy)

The spontaneous work is the sum of the pressure–volume work and the non-pressure–volume work, \(dw^{spon}={dw}^{spon}_{PV}+{dw}^{spon}_{NPV}\). If we introduce the further condition that the spontaneous process occurs while the volume of the system remains constant, we have \({dw}^{spon}_{PV}=0\). Making this substitution and repeating our earlier result for a reversible process, we have the parallel relationships\[{\left(dS\right)}_{EV}>\frac{-dw^{spon}_{NPV}}{\hat{T}} \nonumber \] (spontaneous process, constant \(E\) and \(V\))\[{\left(dS\right)}_{EV}=\frac{-dw^{rev}_{NPV}}{\hat{T}} \nonumber \] (reversible process, constant \(E\) and \(V\))

(For a reversible process, \(T=\hat{T}\).) If the spontaneous process occurs while \(\hat{T}\) is constant, summing the incremental contributions to a finite change of state produces the parallel relationships\[{\left(\Delta S\right)}_{EV}>\frac{-w^{spon}_{NPV}}{\hat{T}} \nonumber \] (spontaneous process, constant \(E\), \(V\), and \(\hat{T}\))\[{\left(\Delta S\right)}_{EV}=\frac{-w^{rev}_{NPV}}{\hat{T}} \nonumber \] (reversible process, constant \(E\), \(V\), and \(\hat{T}\))

Constant \(\hat{T}\) corresponds to the common situation in chemical experimentation in which we place a reaction vessel in a constant-temperature bath. If we introduce the further condition that only pressure–volume work is possible, we have \(dw^{spon}_{NPV}=0\). The parallel relationships become\[{\left(dS\right)}_{EV}>0 \nonumber \] (spontaneous process, constant \(E\) and \(V\), only \(PV\) work)\[{\left(dS\right)}_{EV}=0 \nonumber \] (reversible process, constant \(E\) and \(V\), only \(PV\) work)

If the energy and volume are constant for a system in which only pressure–volume work is possible, the system is isolated. The conditions we have just derived are entirely equivalent to our earlier conclusions that \(dS=0\) and \(dS>0\) for an isolated system that is at equilibrium or undergoing a spontaneous change, respectively. Summing the incremental contributions to a finite change of state produces the parallel relationships\[{\left(\Delta S\right)}_{EV}>0 \nonumber \] (spontaneous process, only \(PV\) work)\[{\left(\Delta S\right)}_{EV}=0 \nonumber \] (reversible process, only \(PV\) work)

The validity of these expressions is independent of any variation in either \(T\) or \(\hat{T}\).
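The classic concrete case of \({\left(dS\right)}_{EV}>0\) is the free expansion of an ideal gas into an evacuated bulb inside a rigid, insulated container: the system is isolated, \(E\) and \(V\) are constant, and no work of any kind is done. The sketch below (my addition; the mole number and volumes are illustrative) evaluates the entropy change along a reversible isothermal path between the same two states.

    import math

    R = 8.314               # J mol^-1 K^-1
    n = 1.0                 # mol (illustrative)
    V1, V2 = 0.010, 0.020   # m^3, gas volume before and after the expansion

    # q = 0 and w = 0, so E is constant; for an ideal gas, T is then unchanged.
    # A reversible isothermal path between the same two states gives Delta S:
    dS = n * R * math.log(V2 / V1)

    print(f"(Delta S)_EV = {dS:.3f} J/K > 0: the free expansion is spontaneous")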
9.22: The Entropy Change for A Spontaneous Process at Constant H and P
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.22%3A_The_Entropy_Change_for_A_Spontaneous_Process_at_Constant_H_and_P

For any spontaneous process, we have\[dH=dE+PdV+VdP=dq^{spon}-P_{applied}dV+dw^{spon}_{NPV}+PdV+VdP \nonumber \]If the pressure is constant (\(P=P_{applied}=\mathrm{constant}\)), this becomes \(dq^{spon}=dH-dw^{spon}_{NPV}\). Substituting our result from Section 9.15, we have\[\hat{T}{\left(dS\right)}_P>dH-dw^{spon}_{NPV} \nonumber \] (spontaneous process, constant \(P\))

If the enthalpy of the system is also constant throughout the process, we have\[\hat{T}{\left(dS\right)}_{HP}>-dw^{spon}_{NPV} \nonumber \] (spontaneous process, constant \(H\) and \(P\))

Dividing by \(\hat{T}\) and repeating our earlier result for a reversible process, we have the parallel relationships\[{\left(dS\right)}_{HP}>\frac{-dw^{spon}_{NPV}}{\hat{T}} \nonumber \] (spontaneous process, constant \(H\) and \(P\))\[{\left(dS\right)}_{HP}=\frac{-dw^{rev}_{NPV}}{\hat{T}} \nonumber \] (reversible process, constant \(H\) and \(P\))

If it is also true that the temperature of the surroundings is constant, summing the incremental contributions to a finite change of state produces the parallel relationships\[{\left(\Delta S\right)}_{HP}>\frac{-w^{spon}_{NPV}}{\hat{T}} \nonumber \] (spontaneous process, constant \(H\), \(P\), and \(\hat{T}\))\[{\left(\Delta S\right)}_{HP}=\frac{-w^{rev}_{NPV}}{\hat{T}} \nonumber \] (reversible process, constant \(H\), \(P\), and \(\hat{T}=T\))

If only pressure–volume work is possible, we have \(dw^{spon}_{NPV}=0\), and\[{\left(dS\right)}_{HP}>0 \nonumber \] (spontaneous process, constant \(H\) and \(P\), only \(PV\) work)\[{\left(dS\right)}_{HP}=0 \nonumber \] (reversible process, constant \(H\) and \(P\), only \(PV\) work)and for a finite change of state,\[{\left(\Delta S\right)}_{HP}>0 \nonumber \] (spontaneous process, only \(PV\) work)\[{\left(\Delta S\right)}_{HP}=0 \nonumber \] (reversible process, only \(PV\) work)

In this and earlier sections, we develop criteria for spontaneous change that are based on \(dE\) and \(dH\). We are now able to develop similar criteria for a spontaneous change in a system that is in thermal contact with constant-temperature surroundings. These criteria are based on \(dA\) and \(dG\). However, before doing so, we develop a general relationship between the isothermal work in a spontaneous process and the isothermal work in a reversible process, when these processes take a system from a common initial state to a common final state.
9.23: The Reversible Work is the Minimum Work at Constant T̂
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.23%3A_The_Reversible_Work_is_the_Minimum_Work_at_Constant_T

The Clausius inequality leads to an important constraint on the work that can be done on a system during a spontaneous process in which the temperature of the surroundings is constant. As we discuss in Section 9.7, the initial state of the spontaneous process cannot be a true equilibrium state. In our present considerations, we assume that the initial values of all the state functions of the spontaneously changing system are the same as those of a true equilibrium system. Likewise, we assume that the final state of the spontaneously changing system is either a true equilibrium state or a state whose thermodynamic functions have the same values as those of a true equilibrium system.

From the first law applied to any spontaneous process in a closed system, we have \({\Delta E}^{rev}={\Delta E}^{spon}\) and \(q^{rev}+w^{rev}=q^{spon}+w^{spon}\). Since the temperature of the system and its surroundings are equal and constant for the reversible process, we have \(q^{rev}=T\Delta S=\hat{T}\Delta S\). So long as the temperature of the surroundings is constant, we have \(q^{spon}<\hat{T}\Delta S\) for the spontaneous process. It follows that\[\hat{T}\Delta S+w^{rev}-w^{spon}=q^{spon}<\hat{T}\Delta S \nonumber \]so that\[w^{rev}<w^{spon} \nonumber \]

The reversible work is the minimum work that can effect a given isothermal change of state. (In Section 7.20, we find this result for the special case in which the only work is the exchange of pressure–volume work between an ideal gas and its surroundings.) Equivalently, a given isothermal process produces the maximum amount of work in the surroundings when it is carried out reversibly: Since \(w^{rev}=-{\hat{w}}^{rev}\) and \(w^{spon}=-{\hat{w}}^{spon}\), we have \(-{\hat{w}}^{rev}<-{\hat{w}}^{spon}\) or\[{\hat{w}}^{rev}>{\hat{w}}^{spon} \nonumber \]
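The inequality \(w^{rev}<w^{spon}\) is easy to check numerically. The sketch below (my addition; the sample size, temperature, and volumes are arbitrary) compares the work done on an ideal gas in a reversible isothermal compression with the work done when the same change of state is driven by a constant applied pressure equal to the final pressure.

    import math

    R = 8.314               # J mol^-1 K^-1
    n, T = 1.0, 300.0       # mol, K; T = T_hat throughout (illustrative)
    V1, V2 = 0.020, 0.010   # m^3, an isothermal compression

    # Reversible path: w = -integral of P dV with P = nRT/V
    w_rev = -n * R * T * math.log(V2 / V1)

    # Spontaneous path: constant applied pressure equal to the final pressure
    P_applied = n * R * T / V2
    w_spon = -P_applied * (V2 - V1)

    print(f"w_rev  = {w_rev:.1f} J")              # ~1729 J
    print(f"w_spon = {w_spon:.1f} J")             # ~2494 J
    print(f"w_rev < w_spon ? {w_rev < w_spon}")   # True: reversible work is the minimum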
9.24: The Free Energy Changes for A Spontaneous Process at Constant T̂
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.24%3A_The_Free_Energy_Changes_for_A_Spontaneous_Process_at_Constant_T

Now let us consider the change in the Helmholtz free energy when a system undergoes a spontaneous change while in thermal contact with surroundings whose temperature remains constant at \(\hat{T}\). We begin by considering an arbitrarily small increment of change in a process in which the temperature of the system remains constant at \(T=\hat{T}\). The change in the Helmholtz free energy for this process is \({\left(dA\right)}_T=dE-TdS\). Substituting \(dE=dq^{spon}+dw^{spon}\) gives\[{\left(dA\right)}_T=dq^{spon}+dw^{spon}-TdS \nonumber \] (spontaneous process, constant \(T\))

Rearranging, we have \({\left(dA\right)}_T-dw^{spon}+TdS=dq^{spon}\). Using the inequality \(dq^{spon}<\hat{T}dS\), we have\[{\left(dA\right)}_T-dw^{spon}+TdS<\hat{T}dS \nonumber \]When we stipulate that \(T=\hat{T}=\mathrm{constant}\), this becomes\[{\left(dA\right)}_T<dw^{spon} \nonumber \] (spontaneous process, constant \(T=\hat{T}\))

If we add the requirement that the volume of the system is constant, the pressure–volume work vanishes, and this becomes \({\left(dA\right)}_{\hat{T}V}<dw^{spon}_{NPV}\); if only pressure–volume work is possible, \({\left(dA\right)}_{\hat{T}V}<0\). A parallel development for the Gibbs free energy, using \(G=H-TS\) and the condition \(P=P_{applied}=\mathrm{constant}\), gives \({\left(dG\right)}_{\hat{T}P}<dw^{spon}_{NPV}\) and, if only pressure–volume work is possible, \({\left(dG\right)}_{\hat{T}P}<0\). Summing these increments over a finite change, we have \({\left(\Delta G\right)}_{P\hat{T}}<0\) for any spontaneous process that occurs at constant pressure, while the system is in contact with surroundings at the constant temperature \(\hat{T}\), and in which the initial and final system temperatures are equal to \(\hat{T}\). These are the most common conditions for carrying out a chemical reaction.

Consider the situation after we mix non-volatile reactants in an open vessel in a constant-temperature bath. We suppose that the initial temperature of the mixture is the same as that of the bath. The atmosphere applies a constant pressure to the system. The reaction is an irreversible process. It proceeds spontaneously until its equilibrium position is reached. Until equilibrium is reached, the reaction cannot be reversed by an arbitrarily small change in the applied pressure or the temperature of the surroundings. \({\left(\Delta G\right)}_{P\hat{T}}<0\) is a criterion for this spontaneous reaction.
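The criterion \({\left(\Delta G\right)}_{P\hat{T}}<0\) is the one chemists apply most often. The sketch below (my addition; the \(\Delta H\) and \(\Delta S\) values are hypothetical numbers chosen only to illustrate a sign change with temperature, and both are assumed temperature-independent) evaluates \(\Delta G=\Delta H-T\Delta S\) at several bath temperatures.

    dH = -100.0e3   # J/mol, hypothetical reaction enthalpy
    dS = -150.0     # J/(mol K), hypothetical reaction entropy

    for T in (300.0, 600.0, 800.0):
        dG = dH - T * dS
        verdict = "spontaneous" if dG < 0 else "not spontaneous as written"
        print(f"T = {T:5.0f} K: Delta G = {dG/1000:7.1f} kJ/mol -> {verdict}")

    # Temperature at which Delta G changes sign (the equilibrium condition)
    print(f"Delta G = 0 at T = {dH/dS:.0f} K")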
9.25: Summary- Thermodynamic Functions as Criteria for Change
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.25%3A_Summary-_Thermodynamic_Functions_as_Criteria_for_Change

For a spontaneous process, we conclude that the entropy changes must satisfy the inequality \(\Delta S+\Delta \hat{S}>0\). For any process that occurs reversibly, we conclude that \(\Delta S+\Delta \hat{S}=0\). For every incremental part of a reversible process that occurs in a closed system, we have the following relationships:

\[dE=TdS-PdV+dw^{rev}_{NPV} \nonumber \]
\[dH=TdS+VdP+dw^{rev}_{NPV} \nonumber \]
\[dA=-SdT-PdV+dw^{rev}_{NPV} \nonumber \]
\[dG=-SdT+VdP+dw^{rev}_{NPV} \nonumber \]

At constant entropy, the energy relationship becomes:

\[{\left(dE\right)}_S=dw^{rev}_{net} \nonumber \] \[{\left(\Delta E\right)}_S=w^{rev}_{net} \nonumber \]

At constant temperature, the Helmholtz free energy relationship becomes:

\[{\left(dA\right)}_T=dw^{rev}_{net} \nonumber \] \[{\left(\Delta A\right)}_T=w^{rev}_{net} \nonumber \]

For reversible processes in which all work is pressure–volume work:

\[dE=TdS-PdV \nonumber \]
\[dH=TdS+VdP \nonumber \]
\[dA=-SdT-PdV \nonumber \]
\[dG=-SdT+VdP \nonumber \]

From these general equations, we find the following relationships for reversible processes when various pairs of variables are held constant:

\[{\left(dS\right)}_{EV}={-dw^{rev}_{NPV}}/{T} \qquad {\left(\Delta S\right)}_{EV}={-w^{rev}_{NPV}}/{T} \nonumber \]
\[{\left(dS\right)}_{HP}={-dw^{rev}_{NPV}}/{T} \qquad {\left(\Delta S\right)}_{HP}={-w^{rev}_{NPV}}/{T} \nonumber \]
\[{\left(dE\right)}_{SV}=dw^{rev}_{NPV} \qquad {\left(\Delta E\right)}_{SV}=w^{rev}_{NPV} \nonumber \]
\[{\left(dH\right)}_{SP}=dw^{rev}_{NPV} \qquad {\left(\Delta H\right)}_{SP}=w^{rev}_{NPV} \nonumber \]
\[{\left(dA\right)}_{TV}=dw^{rev}_{NPV} \qquad {\left(\Delta A\right)}_{TV}=w^{rev}_{NPV} \nonumber \]
\[{\left(dG\right)}_{TP}=dw^{rev}_{NPV} \qquad {\left(\Delta G\right)}_{TP}=w^{rev}_{NPV} \nonumber \]

If the only work is pressure–volume work, then \(dw^{rev}_{NPV}=0\), \(w^{rev}_{NPV}=0\), and these relationships become:

\[{\left(dS\right)}_{EV}=0 \qquad {\left(\Delta S\right)}_{EV}=0 \nonumber \]
\[{\left(dS\right)}_{HP}=0 \qquad {\left(\Delta S\right)}_{HP}=0 \nonumber \]
\[{\left(dE\right)}_{SV}=0 \qquad {\left(\Delta E\right)}_{SV}=0 \nonumber \]
\[{\left(dH\right)}_{SP}=0 \qquad {\left(\Delta H\right)}_{SP}=0 \nonumber \]
\[{\left(dA\right)}_{TV}=0 \qquad {\left(\Delta A\right)}_{TV}=0 \nonumber \]
\[{\left(dG\right)}_{TP}=0 \qquad {\left(\Delta G\right)}_{TP}=0 \nonumber \]

For every incremental part of an irreversible process that occurs in a closed system at constant entropy:

\[{dq}^{spon}<0 \nonumber \] and \[{\left(dE\right)}_S<{dw}^{spon}_{net} \nonumber \]

and for a finite spontaneous change at constant entropy:

\[q^{spon}<0 \nonumber \] and \[{\left(\Delta E\right)}_S<w^{spon}_{net} \nonumber \]

For irreversible processes when various pairs of variables are held constant, the corresponding relationships are:

\[{\left(dS\right)}_{EV}>{-dw^{spon}_{NPV}}/{\hat{T}} \qquad {\left(\Delta S\right)}_{EV}>{-w^{spon}_{NPV}}/{\hat{T}} \nonumber \]
\[{\left(dS\right)}_{HP}>{-dw^{spon}_{NPV}}/{\hat{T}} \qquad {\left(\Delta S\right)}_{HP}>{-w^{spon}_{NPV}}/{\hat{T}} \nonumber \]
\[{\left(dE\right)}_{SV}<dw^{spon}_{NPV} \qquad {\left(\Delta E\right)}_{SV}<w^{spon}_{NPV} \nonumber \]
\[{\left(dH\right)}_{SP}<dw^{spon}_{NPV} \qquad {\left(\Delta H\right)}_{SP}<w^{spon}_{NPV} \nonumber \]
\[{\left(dA\right)}_{\hat{T}V}<dw^{spon}_{NPV} \qquad {\left(\Delta A\right)}_{\hat{T}V}<w^{spon}_{NPV} \nonumber \]
\[{\left(dG\right)}_{\hat{T}P}<dw^{spon}_{NPV} \qquad {\left(\Delta G\right)}_{\hat{T}P}<w^{spon}_{NPV} \nonumber \]

For irreversible processes in which the only work is pressure–volume work, these inequalities become:

\[{\left(dS\right)}_{EV}>0 \qquad {\left(\Delta S\right)}_{EV}>0 \nonumber \]
\[{\left(dS\right)}_{HP}>0 \qquad {\left(\Delta S\right)}_{HP}>0 \nonumber \]
\[{\left(dE\right)}_{SV}<0 \qquad {\left(\Delta E\right)}_{SV}<0 \nonumber \]
\[{\left(dH\right)}_{SP}<0 \qquad {\left(\Delta H\right)}_{SP}<0 \nonumber \]
\[{\left(dA\right)}_{\hat{T}V}<0 \qquad {\left(\Delta A\right)}_{\hat{T}V}<0 \nonumber \]
\[{\left(dG\right)}_{\hat{T}P}<0 \qquad {\left(\Delta G\right)}_{\hat{T}P}<0 \nonumber \]
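The summary above lends itself to a compact machine-readable restatement. The Python sketch below is my own encoding of the table (the dictionary keys and relation strings are mine, not the author's notation); for each pair of variables held constant, it pairs the reversible relation with the corresponding spontaneous inequality, writing \(dw_{NPV}\) as dw_NPV and \(\hat{T}\) as T_hat.

    criteria = {
        # variables held constant: (reversible relation, spontaneous relation)
        # For the (T, V) and (T, P) rows, the constant temperature in the
        # spontaneous case is the surroundings temperature T_hat.
        ("S", "V"): ("dE = dw_NPV", "dE < dw_NPV"),
        ("S", "P"): ("dH = dw_NPV", "dH < dw_NPV"),
        ("T", "V"): ("dA = dw_NPV", "dA < dw_NPV"),
        ("T", "P"): ("dG = dw_NPV", "dG < dw_NPV"),
        ("E", "V"): ("dS = -dw_NPV/T", "dS > -dw_NPV/T_hat"),
        ("H", "P"): ("dS = -dw_NPV/T", "dS > -dw_NPV/T_hat"),
    }

    for (x, y), (rev, spon) in criteria.items():
        print(f"constant {x}, {y}: reversible: {rev}; spontaneous: {spon}")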
9.26: Problems
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/09%3A_The_Second_Law_-_Entropy_and_Spontaneous_Change/9.26%3A_Problems | Problems1. Does a perpetual motion machine of the second kind violate the principle of conservation of energy?2. What is the contrapositive of \(\left(\mathrm{SL\ and}\ \sim \mathrm{MSL}\right)\Rightarrow \left(\Delta \hat{S}<0\right)\)? It is a theorem of logic that \(\sim (\mathrm{B\ and\ C})\Rightarrow (\sim B{\mathrm{and}}/{\mathrm{or}}\sim \mathrm{C})\). Interpret this theorem. Given that SL is true and that \(\sim \left(\mathrm{SL\ and}\ \sim \mathrm{MSL}\right)\) is true, prove that \(\sim \mathrm{MSL}\) is true.3. Max Planck introduced the following statement of the second law:“It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and the cooling of a heat-reservoir.”(M. Planck, Treatise on Thermodynamics, 3rd Edition, translated from the seventh German Edition, Dover Publications, Inc., p 89.) Since we take “raising a weight” to be equivalent to “produces work in the surroundings,” the Planck statement differs from our machine-based statement only in that it allows the temperature of the heat source to decrease as the production of work proceeds. We can now ask whether this difference has any material consequences. In particular, can we prove that the Planck statement implies our machine-based statement, or vice versa? (Suggestion: Suppose that we have identical Planck-type machines, each with its own heat reservoir. We dissipate by friction the work produced by one machine in the heat reservoir of the other.)4. Our statements of the first and second laws have a common format: Assertion that a state function exists; operational definition by which the state function can be measured; statement of a property exhibited by this state function. Express the zero-th law of thermodynamics (Chapter 1) in this format.5. A 0.400 mol sample of \(N_2\) is compressed from 5.00 L to 2.00 L, while the temperature is maintained constant at 350 K. Assume that \(N_2\) is an ideal gas. Calculate the change in the Helmholtz free energy, \(\Delta A\).6. Show that \(\Delta G=\Delta A\) when an ideal gas undergoes a change at constant temperature.7. Calculate \(\Delta E\), \(\Delta H\), and \(\Delta G\) for the process in problem 5.8. A sample of 0.200 mol of an ideal gas, initially at 5.00 bar, expands reversibly and isothermally from 1.00 L to 10.00 L. Calculate \(\Delta E\), \(\Delta H\), and \(\Delta G\) for this process.9. A 100.0 g sample of carbon tetrachloride is compressed from 1.00 bar to 10.00 bar at a constant temperature of 20 C. At 20 C, carbon tetrachloride is a liquid whose density is \(1.5940\ \mathrm{g}\ \mathrm{m}{\mathrm{L}}^{-1}\). Assume that the density does not change significantly with pressure. What is \(\Delta G\) for this process?10. Calculate the Helmholtz free energy change (\(\Delta A\)) in problem 9.11. If \(C_V\) is constant, show that the initial and final temperatures and volumes for an adiabatic ideal-gas expansion are related by the equation \[\left(\frac{T_f}{T_i}\right)=\left(\frac{V_i}{V_f}\right)^{R/C_V} \nonumber \]12. At 25 C, the initial volume of a monatomic ideal gas is 5 L at 10 bar. 
This gas expands to 20 L against a constant applied pressure of 1 bar.(a) Is this process impossible, spontaneous, or reversible?(b) What is the final temperature?(c) Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for this process.13. The same change of state experienced by the monatomic ideal gas in problem 12 can be effected in two steps. Let step A be the reversible cooling of the gas to its final temperature while the pressure is maintained constant at 10 bar. Let step B be the reversible isothermal expansion of the resulting gas to a pressure of 1 bar.(a) Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for step A.(b) Find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for step B.(c) From your results in (a) and (b), find \(q\), \(w\), \(\Delta E\), and \(\Delta H\) for the overall process of step A followed by step B.(d) Compare the values of \(q\), \(w\), \(\Delta E\), and \(\Delta H\) that you find in (c) to the values for the same overall process that you found in problem 12.(e) Find \(\Delta S\) and \(\Delta \hat{S}\) for step A.(f) Find \(\Delta S\) and \(\Delta \hat{S}\) for step B.(g) Find \(\Delta S\), \(\Delta \hat{S}\), and \(\Delta S_{universe}\) for the overall process.14. Assume that the process in problem 12 occurs while the gas is in thermal contact with its surroundings and that the temperature of the surroundings is always equal to the final temperature of the gas. Find \(\Delta \hat{S}\) and \(\Delta S_{universe}\) for this process.15. At 25 C, the initial volume of a monatomic ideal gas is 5 L at 10 bar. The gas expands to 20 L while in thermal contact with surroundings at 125 C. During the expansion, the applied pressure is constant and equal to the equilibrium pressure at the final volume and temperature.(a) Is this process impossible, spontaneous, or reversible?(b) Find \(q\), \(w\), \(\Delta E\), \(\Delta H\), and \(\Delta \hat{S}\) for this process.(c) Find \(\Delta S\) and \(\Delta S_{universe}\) for this process. To find \(\Delta S\), it is necessary to find a reversible alternative path that effects the same change in the system’s state functions.16. At 60 C, the density of water is \(\mathrm{0.98320\ g}\ {\mathrm{cm}}^{-3}\), the vapor pressure is 19,932 Pa, and the enthalpy of vaporization is \(42,482\mathrm{\ J}\ {\mathrm{mol}}^{-1}\). Assume that gaseous water behaves as an ideal gas. A vessel containing liquid and gaseous water is placed in a constant 60 C bath, and the applied pressure is maintained at 19,932 Pa while 100 g of water vaporizes.(a) Is this process impossible, spontaneous, or reversible?(b) Find \(q\), \(w\), \(\Delta E\), \(\Delta H\), \(\Delta S\), \(\Delta A\), and \(\Delta G\) for this process.(c) Is \({\left(\Delta S\right)}_{EV}=0\) a criterion for equilibrium that applies to this system? Why or why not? \({\left(\Delta H\right)}_{SP}=0\)? \({\left(\Delta A\right)}_{VT}=0\)? \({\left(\Delta G\right)}_{PT}=0\)?17. This problem compares the efficiency and \(\sum{q/T}\) for one mole of a monatomic ideal gas taken around a reversible Carnot cycle to the same quantities for the same gas taken around an irreversible cycle using the same two heat reservoirs.(i) Let the successive steps of the reversible Carnot cycle be a, b, c, and d. Isothermal step a begins with the gas occupying 5.00 L at 600 K and ends with the gas occupying 20.00 L. Adiabatic expansion step b ends with the gas at 300 K. After the isothermal compression step c, the gas is adiabatically compressed in step d to the original state.
Find \(P\), \(V\), and \(T\) for the gas at the end of each step of this reversible cycle. Find \(\sum q/\hat{T}\), \(\Delta S\), and \(\Delta \hat{S}\) for the cycle a, b, c, d. What is the efficiency of this cycle?(ii) Suppose that following step b, the ideal gas is warmed at constant volume to 400 K by exchanging heat with the 600 K reservoir. Call this step e. Following step e, the gas is cooled at constant pressure to 300 K by contact with the 300 K reservoir. Call this step f. Following step f, the gas is isothermally and reversibly compressed at 300 K to the same \(P\), \(V\), and \(T\) as the gas reaches at the end of step c. Call this step g. Find \(P\), \(V\), and \(T\) for the gas at the ends of steps e, f, and g. Although steps e and f are not reversible, the same changes can be effected reversibly by keeping \(T=\hat{T}\) as the gas is warmed at constant volume (step e) and cooled at constant pressure (step f). (We discuss this point further in Section 12.4.) Consequently, \({\Delta }_eS=\int^{400\ K}_{300\ K} \frac{C_V}{T}dT\) and \({\Delta }_fS=\int^{300\ K}_{400\ K} \frac{C_P}{T}dT\). Find \(\sum q/\hat{T}\), \(\Delta S\), \(\Delta \hat{S}\), and \(\Delta S_{universe}\) for the cycle a, b, e, f, g, d. What is the efficiency of this cycle?(iii) Compare the value of \(\sum q/ \hat{T}\) that you obtained in part (ii) to the value of \(\sum q/ \hat{T}\) that you obtained in part (i).(iv) Clausius’ theorem states that \(\sum q/ \hat{T}=0\) for a cycle traversed reversibly, and \(\sum q/ \hat{T}<0\) for a cycle traversed spontaneously. Comment.18. For a spontaneous cycle traversed while the temperature changes continuously, Clausius’ theorem asserts that \(\oint{dq/\hat{T}}<0\). Show that this inequality follows from the result, \(dS>dq/ \hat{T}\), that we obtained in Section 9.15 for any spontaneous process in a closed system.19. In Sections 9.6 through 9.8, we conclude that \(\Delta S+\Delta \hat{S}=0\) is necessary for a reversible process, \(\Delta S+\Delta \hat{S}>0\) is necessary for a spontaneous process, and \(\Delta S+\Delta \hat{S}<0\) is necessary for an impossible process. That is: \[(\mathrm{Process\ is\ reversible})\ \ \ \Rightarrow \ \left(\Delta S+\Delta \hat{S}=0\right) \nonumber \] (\(\mathrm{Process\ is\ spontaneous}\))\(\ \ \Rightarrow \left(\Delta S+\Delta \hat{S}>0\right)\), and (\(\mathrm{Process\ is\ impossible}\)) \(\ \ \Rightarrow \left(\Delta S+\Delta \hat{S}<0\right)\).Since we have defined the categories reversible, spontaneous, and impossible so that they are exhaustive and mutually exclusive, the following proposition is true:\[\sim \left(\mathrm{Process\ is\ spontaneous}\right)\ \mathrm{and}\ \sim \left(\mathrm{Process\ is\ impossible}\right) \nonumber \] \[\Rightarrow \left(\mathrm{Process\ is\ reversible}\right) \nonumber \](a) Prove that \(\Delta S+\Delta \hat{S}=0\) is sufficient for the process to be reversible; that is, prove: \[\left(\Delta S+\Delta \hat{S}=0\right)\ \ \Rightarrow \ \ \left(\mathrm{Process\ is\ reversible}\right) \nonumber \](b) Prove that \(\Delta S+\Delta \hat{S}>0\) is sufficient for the process to be spontaneous; that is, prove: \[\left(\Delta S+\Delta \hat{S}>0\ \right)\ \Rightarrow \ \ \left(\mathrm{Process\ is\ spontaneous}\right) \nonumber \](c) Prove that \(\Delta S+\Delta \hat{S}<0\) is sufficient for the process to be impossible; that is, prove: \[\left(\Delta S+\Delta \hat{S}<0\right)\ \ \Rightarrow \ \ \left(\mathrm{Process\ is\ impossible}\right) \nonumber \]20.
Label the successive steps in a reversible Carnot cycle A, B, C, and D, where A is the point at which the pressure is greatest.(a) Sketch the path ABCD in \(P\)–\(V\) space.(b) Sketch the path ABCD in \(T\)–\(dq^{rev}/T\) space.(c) Sketch the path ABCD in \(T\)– \(q^{rev}\) space.(d) Sketch the path BCDA in \(T\)– \(q^{rev}\) space.(e) Sketch the path CDAB in \(T\)– \(q^{rev}\) space.(f) Sketch the path DABC in \(T\)– \(q^{rev}\) space.21. Assume that the earth’s atmosphere is pure nitrogen and that it behaves as an ideal gas. Assume that the molar energy of this nitrogen is constant and that its molar entropy changes are adequately modeled by \(d\overline{S}=\left({C_V}/{T}\right)dT+\left({R}/{\overline{V}}\right)d\overline{V}\). For this atmosphere, show that \[{\left(\frac{\partial T}{\partial h}\right)}_E=\frac{-\overline{M}g}{C_V} \nonumber \] where \(h\) is the height above the earth’s surface, \(\overline{M}\) is the molar mass of dinitrogen (\(0.0280\ \ \mathrm{kg}\ {\mathrm{mol}}^{-1}\)), \(g\) is the acceleration due to gravity (\(9.80\ \mathrm{m}\ {\mathrm{s}}^{-2}\)), and \(C_V\) is the constant-volume heat capacity (\(20.8\ \mathrm{J}\ {\mathrm{K}}^{-1}\ {\mathrm{mol}}^{-1}\)). [Suggestion: Write the total differential for \(\overline{E}=\overline{E}\left(\overline{S},\overline{V},h\right)\). What are \({\left({\partial \overline{E}}/{\partial \overline{S}}\right)}_{\overline{V},h}\), \({\left({\partial \overline{E}}/{\partial \overline{V}}\right)}_{\overline{S},h}\), and \({\left({\partial \overline{E}}/{\partial h}\right)}_{\overline{S},\overline{V}}\)?]If the temperature at sea level is 300 K, what is the temperature on the top of a 3000 m mountain?22. Assume that the earth’s atmosphere is pure nitrogen and that it behaves as an ideal gas. Assume that the molar enthalpy of this nitrogen is constant and that its molar entropy changes are adequately modeled by \(d\overline{S}=\left({C_P}/{T}\right)dT-\left({R}/{P}\right)dP\). For this atmosphere, show that \[{\left(\frac{\partial T}{\partial h}\right)}_S=\frac{-\overline{M}g}{C_P} \nonumber \] where \(h\) is the height above the earth’s surface, \(\overline{M}\) is the molar mass of dinitrogen (\(0.0280\ \ \mathrm{kg}\ {\mathrm{mol}}^{-1}\)), \(g\) is the acceleration due to gravity (\(9.80\ \mathrm{m}\ {\mathrm{s}}^{-2}\)), and \(C_P\) is the constant-pressure heat capacity (\(29.1\ \mathrm{J}\ {\mathrm{K}}^{-1}\ {\mathrm{mol}}^{-1}\)). [Suggestion: Write the total differential for \(\overline{H}=\overline{H}\left(\overline{S},P,h\right)\). What are \({\left({\partial \overline{H}}/{\partial \overline{S}}\right)}_{P,h}\), \({\left({\partial \overline{H}}/{\partial P}\right)}_{\overline{S},h}\), and \({\left({\partial \overline{H}}/{\partial h}\right)}_{\overline{S},P}\)?]Use this approximation to calculate the temperature on the top of a 3000 m mountain when the temperature at sea level is 300 K.23. Hikers often say that, as a rule-of-thumb, the temperature on a mountain decreases by 1 C for every 100 m increase in elevation. Is this rule in accord with the relationships developed in problems 21 and 22? In these problems, we assume that the temperature of an ideal-gas atmosphere varies with altitude but that the molar energy or enthalpy does not. Does this assumption contradict the principle that the energy and enthalpy of an ideal gas depend only on temperature?24.
Derive the barometric formula (Section 2.10) from the assumptions that the earth’s atmosphere is an ideal gas whose molar mass is \(\overline{M}\) and whose temperature and Gibbs free energy are independent of altitude.25. Run in reverse, a Carnot engine consumes work \(\left(w>0\right)\) and transfers heat \(\left(q_{\ell }>0\right)\) from a low-temperature reservoir to a high temperature reservoir \(\left(q_h<0\right)\). The work consumed by the machine is also converted to heat that is discharged to the high-temperature reservoir. In one cycle of the machine, \(\Delta E=q_{\ell }+q_h+w=0\). For a refrigerator—or for a heat pump operating in air-conditioning mode—we are interested in the quantity of heat removed \(\left(q_{\ell }\right)\) per unit of energy expended \(\left(w\right)\). We define the coefficient of performance as \(COP\left(cooling\right)={q_{\ell }}/{w}\). This is at a maximum for the reversible Carnot engine. Show that the theoretical maximum is \[COP\left(cooling\right)=\frac{1-\epsilon }{\epsilon }=\frac{T_{\ell }}{T_h-T_{\ell }} \nonumber \] where \(\epsilon\) is the reversible Carnot-engine efficiency, \[\epsilon =1-\frac{T_{\ell }}{T_h} \nonumber \] 26. For a heat pump operating in heating mode—as a “furnace”—we are interested in the quantity of heat delivered to the space being heated \(\left(-q_h\right)\) per unit of energy expended \(\left(w\right)\). We define the coefficient of performance as \(COP\left(heating\right)=-{q_h}/{w}\). Show that the theoretical maximum is \[COP\left(heating\right)=\frac{1}{\epsilon }=\frac{T_h}{T_h-T_{\ell }} \nonumber \]27. For \(T_{\ell }=300\ \mathrm{K}\) and \(T_h=500\ \mathrm{K}\), calculate the theoretical maxima for \(COP\left(cooling\right)\) and \(COP\left(heating\right)\).28. Find the theoretical maximum \(COP\left(cooling\right)\) for a refrigerator at \(40\ \mathrm{F}\) in a room at \(72\ \mathrm{F}\).29. Find the theoretical maximum \(COP\left(cooling\right)\) for a heat pump that keeps a room at \(72\ \mathrm{F}\) when the outside temperature is \(100\ \mathrm{F}\).30. Find the theoretical maximum \(COP\left(heating\right)\) for a heat pump that keeps a room at \(72\ \mathrm{F}\) when the outside temperature is \(32\ \mathrm{F}\).Notes\({}^{1}\) For an introduction to the concept of internal entropy and its applications, see Ilya Prigogine, Introduction to the Thermodynamics of Irreversible Processes, Interscience Publishers, 1961.This page titled 9.26: Problems is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,256 |
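As a numerical companion to problems 27 through 30 (a sketch added here, not part of the original problem set), the code below evaluates the reversible-limit formulas derived in problems 25 and 26, \(COP\left(cooling\right)=T_{\ell}/(T_h-T_{\ell})\) and \(COP\left(heating\right)=T_h/(T_h-T_{\ell})\). Temperatures must be absolute.

```python
# Minimal sketch: theoretical-maximum coefficients of performance for a
# reversible Carnot device (problems 25-30).

def f_to_kelvin(t_f):
    """Convert degrees Fahrenheit to kelvin."""
    return (t_f - 32.0) * 5.0 / 9.0 + 273.15

def cop_cooling(t_low, t_high):
    return t_low / (t_high - t_low)      # = (1 - eps)/eps

def cop_heating(t_low, t_high):
    return t_high / (t_high - t_low)     # = 1/eps

# Problem 27: Tl = 300 K, Th = 500 K -> 1.5 and 2.5
print(cop_cooling(300.0, 500.0), cop_heating(300.0, 500.0))

# Problem 28: refrigerator interior at 40 F, room at 72 F -> ~15.6
print(cop_cooling(f_to_kelvin(40.0), f_to_kelvin(72.0)))

# Problem 29: air conditioning, room at 72 F, outside at 100 F -> ~19.0
print(cop_cooling(f_to_kelvin(72.0), f_to_kelvin(100.0)))

# Problem 30: heat pump, room at 72 F, outside at 32 F -> ~13.3
print(cop_heating(f_to_kelvin(32.0), f_to_kelvin(72.0)))
```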
About the Author
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/00%3A_Front_Matter/About_the_Author | Following undergraduate work at Carleton College, the author earned a Ph.D. in physical chemistry at Northwestern University. He taught for several years at the University of California at Riverside, and then spent many years in industrial research and industrial research management. He has authored 16 technical publications and is the inventor on 15 U. S. Patents. Following his retirement, he presented a college-level physical chemistry course for several years at The Oklahoma School of Science and Mathematics. This book arose from his perception that many of the obstacles to understanding the subject can be reduced by increased attention to the background material that is included in the initial presentation. Accordingly, this text sacrifices breadth in order to emphasize basic concepts and clarify them by treating them from alternative perspectives. | 8,257 |
InfoPage
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/00%3A_Front_Matter/02%3A_InfoPage | This text is disseminated via the Open Education Resource (OER) LibreTexts Project and, like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such efforts. Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning. The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated. The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education. Have questions or comments? For information about adoptions or adaptations, contact the LibreTexts project; more information on our activities can be found via Facebook, Twitter, or our blog. This text was compiled on 07/13/2023 | 8,260 |
1.1: Describing a System Quantum Mechanically
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/01%3A_Overview_of_Time-Independent_Quantum_Mechanics/1.01%3A_Describing_a_System_Quantum_Mechanically | As a starting point it is useful to review the postulates of quantum mechanics, and use this as an opportunity to elaborate on some definitions and properties of quantum systems.Quantum mechanical matter exhibits wave-particle duality in which the particle properties emphasize classical aspects of the object’s position, mass, and momentum, and the wave properties reflect its spatial delocalization and ability to interfere constructively or destructively with other particles or waves. As a result, in quantum mechanics the physical properties of the system are described by the wavefunction \(\Psi\). The wavefunction is a time-dependent complex probability amplitude function that is itself not observable; however, it encodes all properties of the system’s particles and fields. Depending on the context, particle is a term that will refer to a variety of objects―such as electron, nucleons, and atoms―that fill space and have mass, but also retain wavelike properties. Fields refer to a variety of physical quantities that are continuous in time and space, which have energy and influence the behavior of particles.In the general sense, the wavefunction, or state, does not refer to a three dimensional physical space in which quantum particles exist, but rather an infinite dimensional linear vector space, or Hilbert space, that accounts for all possible observable properties of the system. We can represent the wavefunction in physical space, \(\Psi(\mathbf{r})\) by carrying out a projection onto the desired spatial coordinates. As a probability amplitude function, the wavefunction describes the statistical probability of locating particles or fields in space and time. Specifically, we claim that the square of the wavefunction is proportional to a probability density (probability per unit volume). In one dimension, the probability of finding a particle in a space between x and x+dx at a particular time t is\[P(\mathbf{x}, \mathbf{t}) d x=\Psi^{*}(\mathbf{x}, \mathbf{t}) \Psi(\mathbf{x}, \mathbf{t}) \mathrm{d} \mathrm{x}\]We will always assume that the wavefunctions for a particle are properly normalized, so that \(\int \mathrm{P}(\mathbf{x}, \mathrm{t}) \mathrm{dx}=1\).Quantum mechanics parallels Hamilton’s formulation of classical mechanics, in which the properties of particles and fields are described in terms of their position and momenta. Each particle described by the wavefunction will have associated with it one or more degrees of freedom that are defined by the dimensionality of the problem. For each degree of freedom, particles which are described classically by a position x and momentum \(p_{x}\) will have associated with it a quantum mechanical operator \(\hat{x} \text { or } \hat{p}_{x}\) which will be used to describe physical properties and experimental observables. Operators correspond to dynamical variables, whereas static variables, such as mass, do not have operators associated with them. In practice there is a quantum/classical correspondence which implies that the quantum mechanical behavior can often be deduced from the classical dynamical equations by substituting the quantum mechanical operator for the corresponding classical variables. 
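As a concrete illustration of this quantum/classical correspondence for momentum (a sketch added here, with \(\hbar = 1\) and the grid parameters chosen arbitrarily), the code below applies \(\hat{p}_{x}=-i\hbar(\partial/\partial x)\) numerically to a plane wave \(e^{ikx}\) and recovers \(\hbar k\) times the original function.

```python
# Minimal sketch: the momentum operator -i*hbar*d/dx acting on a plane wave
# exp(i*k*x) returns hbar*k times the same function (a momentum eigenfunction).
import numpy as np

hbar, k = 1.0, 3.0                          # assumed units, hbar = 1
x = np.linspace(0.0, 10.0, 100001)          # fine grid so finite differences are accurate
psi = np.exp(1j * k * x)

p_psi = -1j * hbar * np.gradient(psi, x)    # -i hbar dpsi/dx via central differences
ratio = p_psi[1:-1] / psi[1:-1]             # drop the one-sided edge estimates

# The ratio is constant and real: the eigenvalue hbar*k = 3.0
print(ratio.real.mean(), np.abs(ratio.imag).max())
```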
In the case of position and momenta, these operators are \(x \rightarrow \hat{x}\) and \(\hat{p}_{x}=-i \hbar(\partial / \partial x)\). Table \(\PageIndex{1}\) lists some important operators that we will use. Note that time does not have an operator associated with it, and for our purposes is considered an immutable variable that applies uniformly to the entire system.\begin{equation}
\begin{array}{|l|l|l|l|}
\hline & & \text {Classical variable} & \text {Operator} \\
\hline \text {Position} & (1 \mathrm{D}) & x & \hat{x} \\
\hline & (3 \mathrm{D}) & r & \hat{r} \\
\hline \text {Linear momentum} & (1 \mathrm{D}) & p_{\mathrm{x}} & \hat{p}_{x}=-i \hbar(\partial / \partial x) \\
\hline & (3 \mathrm{D}) & p & \hat{p}=-i \hbar \nabla \\
\hline \begin{array}{l}
\text {Function of position} \\
\text {and momentum}
\end{array} & (1 \mathrm{D}) & f\left(x, p_{\mathrm{x}}\right) & f\left(\hat{x}, \hat{p}_{x}\right) \\
\hline \text {Angular momentum} & (3 \mathrm{D}) & \bar{L}=\bar{r} \times \bar{p} & \hat{L}=-i \hbar \hat{r} \times \bar{\nabla} \\
\hline \begin{array}{l}
\text {z-component of orbital} \\
\text {angular momentum}
\end{array} & & & \hat{L}_{z}=-i \hbar(\partial / \partial \phi) \\
\hline
\end{array}
\end{equation} What do operators do? Operators map one state of the system to another―also known as acting on the wavefunction:\[\hat{\mathrm{A}} \Psi_{0}=\Psi_{\mathrm{A}} \label{2}\]Here \(\Psi_{0}\) is the initial wavefunction and \(\Psi_{\mathrm{A}}\) refers to the wavefunction after the action of the operator \(\hat{\mathrm{A}}\). Whereas the variable \(x\) represents a position in physical space, the operator \(\hat{x}\) maps the wavefunction from Hilbert space onto physical space. Operators also represent a mathematical operation on the wavefunction that influences or changes it, for instance moving it in time and space. Operators may be simply multiplicative, as with the operator \(\hat{x}\), or they may take differential or integral forms. The gradient \(\nabla\), divergence \(\nabla \cdot\), and curl \(\nabla \times\) are examples of differential operators, whereas Fourier and Laplace transforms are integral operators. When writing an operator, it is always understood to be acting on a wavefunction to the right. For instance, the operator \(\hat{p}_{x}\) says that one should differentiate the wavefunction to its right with respect to \(x\) and then multiply the result by \(-i \hbar\). The operator \(\hat{x}\) simply means multiply the wavefunction by \(x\). Since operators generally do not commute, a series of operators must be applied in the prescribed right-to-left order.\[\hat{\mathrm{B}} \hat{\mathrm{A}} \Psi_{0}=\hat{\mathrm{B}} \Psi_{A}=\Psi_{\mathrm{B}, \mathrm{A}} \label{3}\]One special characteristic of operators that we will look for is whether operators are Hermitian. A Hermitian operator is self-adjoint; it obeys the equality \(\hat{A}^{\dagger}=\hat{A}\). Of particular interest is the Hamiltonian, \(\hat{H}\), an operator corresponding to the total energy of the system. The Hamiltonian operator describes all interactions between particles and fields, and thereby determines the state of the system. The Hamiltonian is a sum of the total kinetic and potential energy for the system of interest, \(\hat{H}=\hat{T}+\hat{V}\), and is obtained by substituting the position and momentum operators into the classical Hamiltonian. For one particle under the influence of a potential,\[\hat{H}=-\frac{\hbar^{2}}{2 m} \nabla^{2}+V(\hat{r}, t) \label{4}\]Notation: In the following chapters, we will denote operators with a circumflex only when we are trying to explicitly note its role as an operator, but otherwise we take the distinction between variables and operators to be understood. The properties of a system described by mapping with the operator \(\hat{A}\) can only take on the values \(a\) that satisfy an eigenvalue equation\[\hat{A} \Psi = a \Psi \label{5}\]For instance, if the state of the system is \(\Psi (x) = e^{i p x / \hbar}\), the momentum operator \(\hat {p} _ {x} = - i \hbar ( \partial / \partial x )\) returns the eigenvalue \(p\) (a scalar) times the original wavefunction. Then \(\Psi (x)\) is said to be an eigenfunction of \(\hat {p} _ {x}\). For the Hamiltonian, the solutions to the eigenvalue equation\[\hat {H} \Psi = E \Psi \label{6}\]yield the possible energies of the system. The set of all possible eigenfunctions is also known as the set of eigenstates \(\psi_{i}\). Equation \ref{6} is the time-independent Schrödinger equation (TISE). The eigenstates of \(\hat{A}\) form a complete orthonormal basis. In Hilbert space the wavefunction is expressed as a linear combination of orthonormal functions,\[\Psi = \sum _ {i = 0}^{\infty} c _ {i} \psi _ {i} \label{7}\]where \(c _ {i}\) are complex numbers.
The eigenvectors \(\psi_{i}\) are orthogonal and complete:\[\int _ {- \infty}^{+ \infty} d \tau \psi _ {i}^{*} \psi _ {j} = \delta _ {i j} \label{8}\]and\[\sum _ {i = 0}^{\infty} \left| c _ {i} \right|^{2} = 1 \label{9}\]The choice of orthonormal functions in which to represent the system is not unique and is referred to as selecting a basis set. The change of basis set is effectively a transformation that rotates the wavefunction in Hilbert space. The outcome of a quantum measurement cannot be known with arbitrary accuracy; however, we can statistically describe the probability of measuring a certain value. The measurement of a value associated with the operator \(\hat{A}\) is obtained by calculating the expectation value of the operator\[\langle A\rangle=\int d \tau \Psi^{*} \hat{A} \Psi \label{10}\]Here the integration is over Hilbert space. The brackets \(\langle\ldots\rangle\) refer to an average value that will emerge from a large series of measurements on identically prepared systems. Whereas \(\langle A\rangle\) is an average value, the variance in the distribution of measured values can be calculated from \(\Delta A^{2}=\left\langle A^{2}\right\rangle-\langle A\rangle^{2}\). Since an observable must be real valued, operators corresponding to observables are Hermitian:\[\int d \tau \Psi^{*} \hat{A} \Psi=\int d \tau (\hat{A} \Psi)^{*} \Psi \label{11}\]As a consequence, a Hermitian operator must have real eigenvalues and orthogonal eigenfunctions. Operators are associative but not necessarily commutative. Commutators determine whether two operators commute. The commutator of two operators \(\hat{A}\) and \(\hat{B}\) is defined as\[[\hat{A}, \hat{B}]=\hat{A} \hat{B}-\hat{B} \hat{A} \label{12}\]If we first make an observation of an eigenvalue \(a\) for \(\hat{A}\), one cannot be assured of determining a unique eigenvalue \(b\) for a second operator \(\hat{B}\). This is only possible if the system is an eigenstate of both \(\hat{A}\) and \(\hat{B}\), which would allow one to state that \(\hat{A} \hat{B} \psi=\hat{B} \hat{A} \psi\), or alternatively \([\hat{A}, \hat{B}] \psi=0\). If the operators commute, the commutator is zero, and \(\hat{A}\) and \(\hat{B}\) have simultaneous eigenfunctions. If the operators do not commute, one cannot specify \(a\) and \(b\) exactly; however, the variances in their uncertainties are related by \(\Delta A^{2} \Delta B^{2} \geq\left\langle\frac{1}{2i}[\hat{A}, \hat{B}]\right\rangle^{2}\). As an example, we see that \(\hat{p}_{x}\) and \(\hat{p}_{y}\) commute, but \(\hat{x}\) and \(\hat{p}_{x}\) do not. Thus we can specify the momentum of a particle in the x and y coordinates precisely, but cannot specify both the momentum and position of a particle in the x dimension to arbitrary resolution. We find that \(\left[\hat{x}, \hat{p}_{x}\right]=i \hbar\) and \(\Delta x \Delta p_{x} \geq \hbar / 2\). Note that for the case that the Hamiltonian can be written as a sum of commuting terms, as is the case for a set of independent or separable coordinates or momenta, the total energy is additive in the eigenvalues for each term, and the total eigenfunctions can be written as product states of the eigenfunctions for each term. The wavefunction evolves in time as described by the time-dependent Schrödinger equation (TDSE):\[i \hbar \frac {\partial \Psi} {\partial t} = \hat {H} \Psi \label{13}\]In the following chapter, we will see the reasoning that results in this equation.
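To make eqs. (7)-(10) concrete, here is a small numerical sketch (an addition, not from the original notes). It assumes an orthonormal sine basis on \([0, L]\), a choice that anticipates the particle-in-a-box states of Section 1.3, expands an arbitrary normalized wavefunction in that basis, and checks the normalization of the expansion coefficients and the basis-set evaluation of an expectation value.

```python
# Minimal sketch: basis-set expansion of a wavefunction and expectation values.
import numpy as np

L, n_grid, n_basis = 1.0, 4000, 50
x = np.linspace(0.0, L, n_grid)

# Orthonormal basis functions phi_n(x) = sqrt(2/L) sin(n pi x / L), n = 1..n_basis
phi = np.array([np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
                for n in range(1, n_basis + 1)])

# An arbitrary state obeying the same boundary conditions, normalized on the grid
psi = x * (L - x) * np.exp(-5.0 * (x - 0.3 * L) ** 2)
psi /= np.sqrt(np.trapz(psi ** 2, x))

# Expansion coefficients c_i = <i|Psi> (eq. 7) and completeness check (eq. 9)
c = np.array([np.trapz(phi_n * psi, x) for phi_n in phi])
print(np.sum(np.abs(c) ** 2))             # ~1.0

# Expectation value <x> (eq. 10) versus its matrix form, sum_ij c_i* c_j X_ij
x_direct = np.trapz(psi * x * psi, x)
X = np.array([[np.trapz(p_i * x * p_j, x) for p_j in phi] for p_i in phi])
print(x_direct, (c.conj() @ X @ c).real)  # agree to within truncation error
```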
This page titled 1.1: Describing a System Quantum Mechanically is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,263 |
1.2: Matrix Mechanics
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/01%3A_Overview_of_Time-Independent_Quantum_Mechanics/1.02%3A_Matrix_Mechanics | Most of our work will make use of the matrix mechanics formulation of quantum mechanics. The wavefunction is written as \(|\Psi\rangle\) and referred to as a ket vector. The complex conjugate \(\Psi^{*}=\langle\Psi|\) is a bra vector, where \(\langle a \Psi|=a^{*}\langle\Psi|\). The product of a bra and ket vector, \(\langle\alpha \mid \beta\rangle\) is therefore an inner product (scalar), whereas the product of a ket and bra \(|\beta\rangle\langle\alpha|\) is an outer product (matrix). The use of bra–ket vectors is the Dirac notation in quantum mechanics.In the matrix representation, \(|\Psi\rangle\) is represented as a column vector for the expansion coefficients \(c_{i}\) in a particular basis set.\[|\Psi\rangle=\left(\begin{array}{c}
c_{1} \\
c_{2} \\
c_{3} \\
\vdots
\end{array}\right) \label{14}\]The bra vector \(\langle\Psi|\) refers to a row vector of the conjugate expansion coefficients \(c_{i}^{*}\). Since wavefunctions are normalized, \(\langle\Psi \mid \Psi\rangle=1\). Dirac notation has the advantage of brevity, often shortening the wavefunction to a simple abbreviated notation for the relevant quantum numbers in the problem. For instance, we can write eq. (1.1.7) as\[|\Psi\rangle=\sum_{i} c_{i}|i\rangle \label{15} \]where the sum is over all eigenstates and the \(i^{\text {th}} \text { eigenstate }|i\rangle=\psi_{i}\). Implicit in this equation is that the expansion coefficient for the \(i^{\text {th }} \text { eigenstate is } c_{i}=\langle i \mid \Psi\rangle\). With this brevity comes the tendency to hide some of the variables important to the description of the wavefunction. One has to be aware of this, and although we will use Dirac notation for most of our work, where detail is required, Schrödinger notation will be used. The outer product \(|i\rangle\langle i|\) is known as a projection operator because it can be used to project the wavefunction of the system onto the \(i^{\mathrm{th}}\) eigenstate of the system as \(|i\rangle\langle i \mid \Psi\rangle=c_{i}|i\rangle\). Furthermore, if we sum projection operators over the complete basis set, we obtain an identity operator\[\sum_{i}|i\rangle\langle i|=1 \label{16} \]which is a statement of the completeness of a basis set. The orthogonality of eigenfunctions (eq. (1.1.8)) is summarized as \(\langle i \mid j\rangle=\delta_{i j}\). The operator \(\hat{A}\) is a square matrix that maps from one state to another\[\hat{A}\left|\Psi_{0}\right\rangle=\left|\Psi_{A}\right\rangle \label{17} \]and from eq. (1.1.6) the TISE is\[\hat{H}|\Psi\rangle=E|\Psi\rangle \label{18} \]where E is a diagonal matrix of eigenvalues, which are obtained from the characteristic equation\[\operatorname{det}(H-E \mathbf{I})=0 \label{19}\]The expectation value, a restatement of eq. (1.1.10), is written\[\langle A\rangle=\langle\Psi|\hat{A}| \Psi\rangle \label{20}\]or from eq. (\ref{15})\[\langle A\rangle=\sum_{i} \sum_{j} c_{i}^{*} c_{j} A_{i j} \label{21}\]where \(A_{i j}=\langle i|A| j\rangle\) are the matrix elements of the operator \(\hat{A}\). As we will see later, the matrix of expansion coefficients \(\rho_{i j}=c_{i}^{*} c_{j}\) is known as the density matrix. From eq. (\ref{18}), we see that the expectation value of the Hamiltonian is the energy of the system,\[E=\langle\Psi|H| \Psi\rangle \label{22}\]Hermitian operators play a special role in quantum mechanics. The Hermitian adjoint of an operator \(\hat{A} \text { is written } \hat{A}^{\dagger}\), and is defined as the conjugate transpose of \(\hat{A}: \hat{A}^{\dagger}=\left(\hat{A}^{T}\right)^{*}\). From this we see \(\langle\hat{A} \psi \mid \phi\rangle=\left\langle\psi \mid \hat{A}^{\dagger} \phi\right\rangle\). A Hermitian operator is one that is self-adjoint, i.e., \(\hat{A}^{\dagger}=\hat{A}\). For a Hermitian operator, a unique unitary transformation exists that will diagonalize it. Each basis set provides a different route to representing the same physical system, and a similarity transformation S transforms a matrix from one orthonormal basis to another. A transformation from the state \(|\Psi\rangle\) to the state \(|\Theta\rangle\) can be expressed as\[|\Theta\rangle=S|\Psi\rangle\]where the elements of the matrix are \(S_{i j}=\left\langle\theta_{i} \mid \psi_{j}\right\rangle\).
Then the reverse transformation is\[|\Psi\rangle=S^{\dagger}|\Theta\rangle\]If \(S^{-1} = S^{\dagger}\), then \(S^{\dagger} S=1\) and the transformation is said to be unitary. A unitary transformation refers to a similarity transformation in Hilbert space that preserves the scalar product, i.e., the length of the vector. The transformation of an operator from one basis to another is obtained from \(S^{\dagger} A S\) and diagonalizing refers to finding the unitary transformation that puts the matrix A in diagonal form.1. The inverse of \(\hat{A}\left(\text { written } \hat{A}^{-1}\right)\) is defined by\[\hat{A}^{-1} \hat{A}=\hat{A} \hat{A}^{-1}=1\]2. The transpose of \(\hat{A}\left(\text { written } A^{T}\right)\) is\[\left(A^{T}\right)_{n q}=A_{q n}\]If \(A^{T}=-A\) then the matrix is antisymmetric.3. The trace of \(\hat{A}\) is defined as\[\operatorname{Tr}(\hat{A})=\sum_{q} A_{q q}\]The trace of a matrix is invariant to a similarity operation.4. The Hermitian adjoint of \(\hat{A}\left(\text { written } \hat{A}^{\dagger}\right)\) is\[\begin{array}{l}
\hat{A}^{\dagger}=\left(\hat{A}^{T}\right)^{*} \\
\left(\hat{A}^{\dagger}\right)_{n q}=\left(\hat{A}_{q n}\right)^{*}
\end{array}\]5. \(\hat{A}\) is Hermitian if \(\hat{A}^{\dagger}=\hat{A}\)\[\left(\hat{A}^{T}\right)^{*}=\hat{A}\]If \(\hat{A}\) is Hermitian, then \(\hat{A}^{n}\) is Hermitian and \(e^{\hat{A}}\) is Hermitian. For a Hermitian operator, \(\langle\psi \mid \hat{A} \varphi\rangle=\langle\hat{A} \psi \mid \varphi\rangle\). Expectation values of Hermitian operators are real, so all physical observables are associated with Hermitian operators.6. \(\hat{A}\) is a unitary operator if its adjoint is also its inverse:\[\begin{array}{l}
\hat{A}^{\dagger}=\hat{A}^{-1} \\
\left(\hat{A}^{T}\right)^{*}=\hat{A}^{-1} \\
\hat{A} \hat{A}^{\dagger}=1 \quad \Rightarrow \quad\left(\hat{A} \hat{A}^{\dagger}\right)_{n q}=\delta_{n q}
\end{array}\]7. If \(\hat{A}^{\dagger}=-\hat{A}\), then \(\hat{A}\) is said to be anti-Hermitian. Anti-Hermitian operators have imaginary expectation values. Any operator can be decomposed into its Hermitian and anti-Hermitian parts as\[\begin{array}{l}
\hat{A}=\hat{A}_{H}+\hat{A}_{A H} \\
\hat{A}_{H}=\frac{1}{2}\left(\hat{A}+\hat{A}^{\dagger}\right) \\
\hat{A}_{A H}=\frac{1}{2}\left(\hat{A}-\hat{A}^{\dagger}\right)
\end{array}\]From the definition of a commutator:\[[\hat{A}, \hat{B}]=\hat{A} \hat{B}-\hat{B} \hat{A}\]we find it is antisymmetric to exchange:\[[\hat{A}, \hat{B}]=-[\hat{B}, \hat{A}]\]and distributive:\[[\hat{A}, \hat{B}+\hat{C}]=[\hat{A}, \hat{B}]+[\hat{A}, \hat{C}]\]These properties lead to a number of useful identities (the first two hold when \([\hat{A}, \hat{B}]\) commutes with \(\hat{B}\) or \(\hat{A}\), respectively):\[\left[\hat{A}, \hat{B}^{n}\right]=n \hat{B}^{n-1}[\hat{A}, \hat{B}]\]\[\left[\hat{A}^{n}, \hat{B}\right]=n \hat{A}^{n-1}[\hat{A}, \hat{B}]\]\[[\hat{A}, \hat{B} \hat{C}]=[\hat{A}, \hat{B}] \hat{C}+\hat{B}[\hat{A}, \hat{C}]\]\[[[\hat{C}, \hat{B}], \hat{A}]=[[\hat{A}, \hat{B}], \hat{C}]\]\[\begin{array}{l}
{[\hat{A},[\hat{B}, \hat{C}]]+[\hat{B},[\hat{C}, \hat{A}]]} \\
\quad+[\hat{C},[\hat{A}, \hat{B}]]=0
\end{array}\]The Hermitian conjugate of a commutator is\[[\hat{A}, \hat{B}]^{\dagger}=\left[\hat{B}^{\dagger}, \hat{A}^{\dagger}\right]\]It follows that the commutator of two Hermitian operators is anti-Hermitian. The anti-commutator is defined as\[[\hat{A}, \hat{B}]_{+}=\hat{A} \hat{B}+\hat{B} \hat{A}\]and is symmetric to exchange. For two Hermitian operators, their product can be written in terms of the commutator and anti-commutator as\[\hat{A} \hat{B}=\frac{1}{2}[\hat{A}, \hat{B}]+\frac{1}{2}[\hat{A}, \hat{B}]_{+}\]For Hermitian operators, the anti-commutator term is the Hermitian ("real") part of the product, whereas the commutator term is the anti-Hermitian ("imaginary") part.This page titled 1.2: Matrix Mechanics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,264 |
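Because all of these relationships are statements about matrices, they are easy to verify numerically. The following sketch (an addition to the text; finite random matrices stand in for operators on Hilbert space) checks the Hermitian/anti-Hermitian decomposition, the Jacobi identity, the adjoint of a commutator, and the commutator/anti-commutator splitting of a product of Hermitian operators.

```python
# Minimal sketch: numerically verify several operator identities from above.
import numpy as np

rng = np.random.default_rng(0)
dim = 6
def rand_op():
    return rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

A, B, C = rand_op(), rand_op(), rand_op()
comm = lambda X, Y: X @ Y - Y @ X         # commutator [X, Y]
acomm = lambda X, Y: X @ Y + Y @ X        # anti-commutator [X, Y]_+
dag = lambda X: X.conj().T                # Hermitian adjoint

# Hermitian / anti-Hermitian decomposition: A = A_H + A_AH
A_H, A_AH = 0.5 * (A + dag(A)), 0.5 * (A - dag(A))
print(np.allclose(A, A_H + A_AH))                           # True

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
J = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.allclose(J, 0))                                    # True

# Adjoint of a commutator: [A,B]^dag = [B^dag, A^dag]
print(np.allclose(dag(comm(A, B)), comm(dag(B), dag(A))))   # True

# Product of two Hermitian operators splits into commutator + anti-commutator
H1, H2 = A_H, 0.5 * (B + dag(B))
print(np.allclose(H1 @ H2, 0.5 * comm(H1, H2) + 0.5 * acomm(H1, H2)))  # True
```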
1.3: Basic Quantum Mechanical Models
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/01%3A_Overview_of_Time-Independent_Quantum_Mechanics/1.03%3A_Basic_Quantum_Mechanical_Models | This section summarizes the results that emerge for common models for quantum mechanical objects. These form the starting point for describing the motion of electrons and the translational, rotational, and vibrational motions for molecules. Thus they are the basis for developing intuition about more complex problems.Waves form the basis for our quantum mechanical description of matter. Waves describe the oscillatory amplitude of matter and fields in time and space, and can take a number of forms. The simplest form we will use is plane waves, which can be written as\[\psi ( \mathbf {r} , t ) = \mathbf {A} \exp [ i \mathbf {k} \cdot \mathbf {r} - i \omega t ] \label{43}\]The angular frequency \(\omega\) describes the oscillations in time and is related to the number of cycles per second through \(\nu = \omega/2\pi\). The wave amplitude also varies in space as determined by the wavevector \(\mathbf {k}\); the wavelength, the distance per cycle, is \(\lambda = 2\pi/k\). Thus the wave propagates in time and space along a direction \(\mathbf {k}\) with a vector amplitude \(\mathbf{A}\) and a phase velocity \(v_{\phi} = \nu \lambda = \omega/k\).For a free particle of mass \(m\) in one dimension, the Hamiltonian only reflects the kinetic energy of the particle\[\hat {H} = \hat {T} = \frac {\hat {p}^{2}} {2 m} \label{44}\]Judging from the functional form of the momentum operator, we assume that the wavefunctions will have the form of plane waves\[\psi (x) = A e^{i k x} \label{45}\]Inserting this expression into the TISE, eq. (1.1.6), we find that\[k = \sqrt {\frac {2 m E} {\hbar^{2}}} \label{46}\]and set \(A = 1 / \sqrt {2 \pi}\). Now, since we know that \(E = p^{2} / 2 m\), we can write\[k = \frac {p} {\hbar} \label{47}\]\(k\) is the wavevector, and \(\hbar k\) is identified with the momentum of the particle.Free particle plane waves \(\psi _ {k} (x)\) form a complete and continuous basis set with which to describe the wavefunction. Note that the eigenfunctions, Equation (\ref{45}), are oscillatory over all space. Thus describing a plane wave allows one to exactly specify the wavevector or momentum of the particle, but one cannot localize it to any point in space. In this form, the free particle is not observable because its wavefunction extends infinitely and cannot be normalized. A measurement, however, in which an expectation value of a Hermitian operator is taken, will collapse this wavefunction to yield an average momentum of the particle with a corresponding uncertainty in its position.The minimal model for translational motion of a particle that is confined in space is given by the particle-in-a-box. For the case of a particle confined in one dimension in a box of length L with impenetrable walls, we define the Hamiltonian as\[\hat {H} = \frac {\hat {p}^{2}} {2 m} + V (x) \label{48}\]\[V (x) = \left\{\begin{array} {l l} {0} & {0 < x < L} \\ {\infty} & {\text {otherwise}} \end{array} \right. \label{49}\]The boundary conditions require that the particle cannot have any probability of being within the wall, so the wavefunction should vanish at \(x = 0\) and \(x = L\), as with standing waves. We therefore assume a solution in the form of a sine function.
The properly normalized eigenfunctions are\[\psi _ {n} = \sqrt {\frac {2} {L}} \sin \frac {n \pi x} {L} \quad n = 1,2,3 \dots \label{50}\]Here \(n\) are the integer quantum numbers that describe the harmonics of the fundamental spatial frequency \(\pi/L\) whose oscillations will fit into the box while obeying the boundary conditions. We see that any state of the particle-in-a-box can be expressed in a Fourier series. On inserting Equation \ref{50} into the time-independent Schrödinger equation, we find the energy eigenvalues\[E _ {n} = \frac {n^{2} \pi^{2} \hbar^{2}} {2 m L^{2}} \label{51}\]Note that the spacing between adjacent energy levels grows as \((2n+1)\). This model is readily extended to a three-dimensional box by separating the box into \(x\), \(y\), and \(z\) coordinates. Then\[\hat {H} = \hat {H} _ {x} + \hat {H} _ {y} + \hat {H} _ {z} \label{52}\]in which each term is specified as Equation \ref{48}. Since \(\hat {H} _ {x}\), \(\hat {H} _ {y}\), \(\hat {H} _ {z}\) commute, each dimension is separable from the others. Then we find\[\psi ( x , y , z ) = \psi _ {x} \psi _ {y} \psi _ {z} \label{53}\]and\[E _ {x , y , z} = E _ {x} + E _ {y} + E _ {z} \label{54}\]which follow the definitions given in Equations \ref{50} and \ref{51} above. The state of the system is now specified by three quantum numbers with positive integer values: \(n_x\), \(n_y\), \(n_z\). Particle-in-a-box potential with wavefunctions plotted superimposed on their corresponding energy levels. Harmonic oscillator potential showing wavefunctions superimposed on their corresponding energy levels. The harmonic oscillator Hamiltonian refers to a particle confined to a parabolic, or harmonic, potential. We will use it to represent vibrational motion in molecules, but it also becomes a general framework for understanding all bosons.
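A quick numerical cross-check of eq. (51) is sketched below (an addition to the text, with the assumed convention \(\hbar = m = 1\) and an arbitrary grid size): diagonalizing a finite-difference representation of the kinetic energy operator with hard-wall boundary conditions reproduces the particle-in-a-box energies.

```python
# Minimal sketch: particle in a box by direct numerical diagonalization.
import numpy as np

L, N = 1.0, 1000
dx = L / (N + 1)
x = np.linspace(dx, L - dx, N)         # interior grid points; psi = 0 at the walls

# -hbar^2/2m d^2/dx^2 as a tridiagonal matrix (hbar = m = 1)
main = np.full(N, 1.0 / dx ** 2)
off = np.full(N - 1, -0.5 / dx ** 2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals, evecs = np.linalg.eigh(H)
n = np.arange(1, 6)
exact = n ** 2 * np.pi ** 2 / (2 * L ** 2)   # eq. (51) with hbar = m = 1
print(evals[:5])                             # numerical eigenvalues
print(exact)                                 # analytic; agreement improves with N
```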
For a classical particle bound in a one-dimensional potential, the potential near the minimum \(x_0\) can be expanded as\[V (x) = V \left( x _ {0} \right) + \left( \frac {\partial V} {\partial x} \right) _ {x = x _ {0}} \left( x - x _ {0} \right) + \frac {1} {2} \left( \frac {\partial^{2} V} {\partial x^{2}} \right) _ {x = x _ {0}} \left( x - x _ {0} \right)^{2} + \cdots \label{55}\]Setting \(x_0\) to 0 and noting that the first-derivative term vanishes at a potential minimum, the leading term with a dependence on \(x\) is the second-order (harmonic) term \(V = \kappa x^{2} / 2\), where the force constant\[\kappa = \left( \partial^{2} V / \partial x^{2} \right) _ {x = 0}.\]The classical Hamiltonian for a particle of mass \(m\) confined to this potential is\[H = \frac {p^{2}} {2 m} + \frac {1} {2} \kappa x^{2} \label{56}\]Noting that the force constant and frequency of oscillation are related by\[\kappa = m \omega _ {0}^{2},\]we can substitute operators for \(p\) and \(x\) in Equation \ref{56} to obtain the quantum Hamiltonian\[\hat {H} = - \frac {1} {2} \frac {\hbar^{2}} {m} \frac {\partial^{2}} {\partial x^{2}} + \frac {1} {2} m \omega _ {0}^{2} \hat {x}^{2} \label{57}\]We will also make use of reduced mass-weighted coordinates defined as\[p = \sqrt {\frac {1} {2 m \hbar \omega _ {0}}} \hat {p}\label{58A}\]\[q = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \hat {x} \label{58B}\]for which the Hamiltonian can be written as\[\hat {H} = \hbar \omega _ {0} \left( p^{2} + q^{2} \right) \label{59}\]The eigenstates for the Harmonic oscillator are expressed in terms of Hermite polynomials\[\psi _ {n} (x) = \sqrt {\frac {\alpha} {2^{n} \sqrt {\pi} n !}} e^{- \alpha^{2} x^{2} / 2} \mathcal {H} _ {n} ( \alpha x ) \label{60}\]where \(\alpha=\sqrt{m \omega_{0} / \hbar}\) and the Hermite polynomials are obtained from\[\mathcal {H} _ {n} (x) = ( - 1 )^{n} e^{x^{2}} \frac {d^{n}} {d x^{n}} e^{- x^{2}} \label{61}\]The corresponding energy eigenvalues are equally spaced in units of the vibrational quantum \(\hbar \omega _ {0} \) above the zero-point energy \(\hbar \omega _ {0} / 2\).\[E_{n}=\hbar \omega_{0}\left(n+\frac{1}{2}\right) \quad n=0,1,2 \ldots \label{62}\]Raising and Lowering Operators for Harmonic Oscillators: From a practical point of view, it will be most useful for us to work problems involving harmonic oscillators in terms of raising and lowering operators (also known as creation and annihilation operators, or ladder operators). We define these as\[\hat {a} = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \left( \hat {x} + \frac {i} {m \omega _ {0}} \hat {p} \right) \label{63}\]\[\hat {a}^{\dagger} = \sqrt {\frac {m \omega _ {0}} {2 \hbar}} \left( \hat {x} - \frac {i} {m \omega _ {0}} \hat {p} \right) \label{64}\]Note \(a\) and \(a^{\dagger}\) operators are Hermitian conjugates of one another.
These operators get their name from their action on the harmonic oscillator wavefunctions, which is to lower or raise the state of the system:\[\hat {a} | n \rangle = \sqrt {n} | n - 1 \rangle \label{65}\]and\[\hat {a}^{\dagger} | n \rangle = \sqrt {n + 1} | n + 1 \rangle\]Then we find that the position and momentum operators are\[\hat {x} = \sqrt {\frac {\hbar} {2 m \omega _ {0}}} \left( \hat {a}^{\dagger} + \hat {a} \right) \label{66}\]\[\hat {p} = i \sqrt {\frac {m \hbar \omega _ {0}} {2}} \left( \hat {a}^{\dagger} - \hat {a} \right) \label{67}\]When we substitute these ladder operators for the position and momentum operators—known as second quantization—the Hamiltonian becomes\[\hat {H} = \hbar \omega _ {0} \left( \hat {n} + \frac {1} {2} \right) \label{68}\]The number operator is defined as \(\hat {n} = \hat {a}^{\dagger} \hat {a}\) and returns the quantum number of the state: \(\hat{n} | n \rangle = n | n \rangle\). The energy eigenvalues satisfying \(\hat {H} | n \rangle = E _ {n} | n \rangle\) are given by Equation \ref{62}. Since the quantum numbers cannot be negative, we assert a boundary condition \(a | 0 \rangle = 0\), where \(0\) refers to the null vector. The harmonic oscillator Hamiltonian expressed in raising and lowering operators, together with its commutation relationship\[\left[ a , a^{\dagger} \right] = 1 \label{69}\]is used as a general representation of all bosons, which for our purposes includes vibrations and photons.Properties of raising and lowering operators: \(a\) and \(a^{\dagger}\) are Hermitian conjugates of one another.\[\frac{1}{2}\left(a a^{\dagger} + a^{\dagger} a\right) = a^{\dagger} a + \frac {1} {2} \label{70}\]\[\left[ a , a^{\dagger} \right] = 1 \label{71}\]\[[ a , a ] = 0 \left[ a^{\dagger} , a^{\dagger} \right] = 0 \label{72}\]\[\left[ a , \left( a^{\dagger} \right)^{n} \right] = n \left( a^{\dagger} \right)^{n - 1} \label{73}\]\[\left[ a^{\dagger} , a^{n} \right] = - n a^{n - 1} \label{74}\]\[| n \rangle = \frac {1} {\sqrt {n !}} \left( a^{\dagger} \right)^{n} | 0 \rangle \label{75}\]The Morse oscillator is a model for a particle in a one-dimensional anharmonic potential energy surface with a dissociative limit at infinite displacement. It is commonly used for describing the spectroscopy of diatomic molecules and anharmonic vibrational dynamics, and most of its properties can be expressed through analytical expressions. The Morse potential is\[V (x) = D _ {e} \left[ 1 - e^{- \alpha x} \right]^{2} \label{76}\]where \(x = \left( r - r _ {0} \right)\). \(D_e\) sets the depth of the energy minimum at \(r = r_0\) relative to the dissociation limit as \(r → ∞\), and α sets the curvature of the potential. If we expand \(V\) in powers of \(x\) as described in Equation \ref{55}\[V (x) \approx \frac {1} {2} \kappa x^{2} + \frac {1} {6} g x^{3} + \frac {1} {24} h x^{4} + \cdots \label{77}\]we find that the harmonic, cubic, and quartic expansion coefficients are\[\kappa = 2 D _ {e} \alpha^{2},\]\[g = - 6 D _ {e} \alpha^{3},\]and \[h = 14 D _ {e} \alpha^{4}.\]The Morse oscillator Hamiltonian for a diatomic molecule of reduced mass \(m_R\) bound by this potential is\[H = \frac {p^{2}} {2 m _ {R}} + V (x) \label{78}\]and has the eigenvalues\[E _ {n} = \hbar \omega _ {0} \left[ \left( n + \frac {1} {2} \right) - x _ {e} \left( n + \frac {1} {2} \right)^{2} \right] \label{79}\]Here \(\omega _ {0} = \sqrt {2 D _ {e} \alpha^{2} / m _ {R}}\) is the fundamental frequency and \(x _ {e} = \hbar \omega _ {0} / 4 D _ {e}\) is the anharmonic constant.
Similar to the harmonic oscillator, the frequency \(\omega _ {0} = \sqrt {\kappa / m _ {R}}\). The anharmonic constant \(x_e\) is commonly seen in the spectroscopy expression for the anharmonic vibrational energy levels\[G ( v ) = \omega _ {e} \left( v + \frac {1} {2} \right) - \omega _ {e} x _ {e} \left( v + \frac {1} {2} \right)^{2} + \omega _ {e} y _ {e} \left( v + \frac {1} {2} \right)^{3} + \cdots \label{80}\]From Equation \ref{79}, the ground state (or zero-point) energy is\[E _ {0} = \frac {1} {2} \hbar \omega _ {0} \left( 1 - \frac {1} {2} x _ {e} \right) \label{81}\]So the dissociation energy for the Morse potential is given by \(D_{0}=D_{e}-E_{0}\). The transition energies are\[E _ {n} - E _ {m} = \hbar \omega _ {0} ( n - m ) \left[ 1 - x _ {e} \left( n + m + 1 \right) \right] \label{82}\]The proper harmonic expressions are obtained from the corresponding Morse oscillator expressions by setting \(D _ {e} \rightarrow \infty\) or \(x _ {e} \rightarrow 0\).Shape of the Morse potential illustrating the first six energy eigenvalues. First six eigenfunctions of the Morse oscillator potential. The wavefunctions for the Morse oscillator can also be expressed analytically in terms of associated Laguerre polynomials \(\mathcal {L} _ {n}^{b} ( z )\)\[\psi _ {n} = N _ {n} e^{- z / 2} z^{b / 2} \mathcal {L} _ {n}^{b} ( z ) \label{83}\]where \(N_{n}=[\alpha \cdot b \cdot n ! / \Gamma(k-n)]^{1 / 2}\), \(z=k \exp [-\alpha q], b=k-2 n-1\), and \(k=4 D_{e} / \hbar \omega_{0}\). These expressions and those for matrix elements in \(q\), \(q^{2}\), \(\mathrm{e}^{-\alpha q}\), and \(q \mathrm{e}^{-\alpha q}\) have been given by Vasan and Cross.To describe quantum mechanical rotation or orbital motion, one has to quantize angular momentum. The total orbital angular momentum operator is defined as\[\hat{L}=\hat{r} \times \hat{p}=-i \hbar(\hat{r} \times \nabla)\]It has three components \(\left(\hat{L}_{x}, \hat{L}_{y}, \hat{L}_{z}\right)\) that generate rotation about the x, y, or z axis, and whose magnitude is given by \(\hat{L}^{2}=\hat{L}_{x}^{2}+\hat{L}_{y}^{2}+\hat{L}_{z}^{2}\). For a spherically symmetric potential, the angular momentum operators follow the commutation relationships\[\left[ H , L _ {z} \right] = 0 \label{85A}\]\[\left[ H , L^{2} \right] = 0 \label{85B}\]\[\left[ L _ {x} , L _ {y} \right] = i \hbar L _ {z} \label{86}\](In Equation \ref{86} the \(x\), \(y\), \(z\) indices can be cyclically permuted.) There is an eigenbasis common to \(H\) and \(L^2\) and one of the \(L_i\), which we take to be \(L_z\).
The eigenvalues for the orbital angular momentum operator L and z-projection of the angular momentum Lz are\[L^{2} | \ell m \rangle = \hbar^{2} \ell ( \ell + 1 ) | \ell m \rangle \quad \ell = 0,1,2 \ldots \label{87}\]\[L _ {z} | \ell m \rangle = \hbar m | \ell m \rangle \quad m = 0 , \pm 1 , \pm 2 \ldots \pm \ell \label{88}\]where the eigenstates \(| \ell m \rangle\) are labeled by the orbital angular momentum quantum number \(\ell\), and the magnetic quantum number, \(m\).Similar to the strategy used for the harmonic oscillator, we can also define raising and lowering operators for the total angular momentum,\[\hat {L} _ {\pm} = \hat {L} _ {x} \pm i \hat {L} _ {y} \label{89}\]which follow the commutation relations \(\left[ \hat {L}^{2} , \hat {L} _ {\pm} \right] = 0\) and \(\left[ \hat {L} _ {z} , \hat {L} _ {\pm} \right] = \pm \hbar \hat {L} _ {\pm}\), and satisfy the eigenvalue equation\[\hat {L} _ {\pm} | \ell m \rangle = A _ {\ell , m} | \ell , m \pm 1 \rangle \label{90}\]\[A _ {\ell , m} = \hbar [ \ell ( \ell + 1 ) - m ( m \pm 1 ) ]^{1 / 2}\]Let's examine the role of angular momentum for the case of a particle experiencing a spherically symmetric potential V(r) such as the hydrogen atom, 3D isotropic harmonic oscillator, and free particles or molecules. For a particle with mass \(m\), the Hamiltonian is\[\hat{H}=-\frac{\hbar^{2}}{2 m} \nabla^{2}+V(r) \label{91}\]Writing the kinetic energy operator in spherical coordinates,\[-\frac{\hbar^{2}}{2 m} \nabla^{2}=-\frac{\hbar^{2}}{2 m} \frac{1}{r^{2}} \frac{\partial}{\partial r} r^{2} \frac{\partial}{\partial r}+\frac{\hat{L}^{2}}{2 m r^{2}} \label{92}\]where the square of the total angular momentum is\[\hat{L}^{2}=-\frac{\hbar^{2}}{\sin \theta}\left(\frac{1}{\sin \theta} \frac{\partial^{2}}{\partial \phi^{2}}+\frac{\partial}{\partial \theta} \sin \theta \frac{\partial}{\partial \theta}\right) \label{93}\]We note that this representation separates the radial dependence in the Hamiltonian from the angular part. We therefore expect that the overall wavefunction can be written as a product of a radial and an angular part in the form\[\psi(r, \theta, \phi)=R(r) Y(\theta, \phi) \label{94}\]Substituting this into the TISE, we find that we solve for the orientational and radial wavefunctions separately. Considering first the angular part, we note that the potential is only a function of \(r\), so we need only consider the angular momentum. This leads to the identities in eqs. (\ref{87}) and (\ref{88}), and reveals that the \(|\ell m\rangle\) wavefunctions projected onto spherical coordinates are represented by the spherical harmonics\[Y_{\ell}^{m}(\theta, \phi)=N_{\ell m}^{Y} P_{\ell}^{|m|}(\cos \theta) \mathrm{e}^{i m \phi} \label{95}\]\(P_{\ell}^{m}\) are the associated Legendre polynomials and the normalization factor is\[N_{\ell m}^{Y}=(-1)^{(m+|m|) / 2} i^{\ell}\left[\frac{2 \ell+1}{4 \pi} \frac{(\ell-|m|) !}{(\ell+|m|) !}\right]^{1 / 2}\]The angular components of the wavefunction are common to all eigenstates of spherically symmetric potentials. In chemistry, it is common to use real angular wavefunctions instead of the complex form in eq. (\ref{95}). These are constructed from the linear combinations \(Y_{\ell}^{m} \pm Y_{\ell}^{-m}\).Substituting eq. (\ref{92}) and eq. (\ref{87}) into eq. (\ref{91}) leads to a new Hamiltonian that can be inserted into the Schrödinger equation. This can be solved as a purely radial problem for a given value of \(\ell\).
It is convenient to define the radial distribution function \(\chi(r)=r R(r)\), which allows the TISE to be rewritten as\[\left(-\frac{\hbar^{2}}{2 m} \frac{\partial^{2}}{\partial r^{2}}+U(r, \ell)\right) \chi=E \chi \label{96}\]U plays the role of an effective potential\[U(r, \ell)=V(r)+\frac{\hbar^{2}}{2 m r^{2}} \ell(\ell+1) \label{97}\]Equation (\ref{96}) is known as the radial wave equation. It looks like the TISE for a one-dimensional problem in r, where we could solve this equation for each value of \(\ell\). Note U has a barrier due to centrifugal kinetic energy that scales as \(r^{-2}\) for \(\ell>0\).The wavefunctions defined in eq. (\ref{94}) are normalized such that\[\int|\psi|^{2} d \Omega=1 \label{98}\]where \[\int d \Omega \equiv \int_{0}^{\infty} r^{2} d r \int_{0}^{\pi} \sin \theta d \theta \int_{0}^{2 \pi} d \phi \label{99}\]If we restrict the integration to be over all angles, we find that the probability of finding a particle between a distance \(r\) and \(r+d r\) is \(P(r)=r^{2}|R(r)|^{2}=|\chi(r)|^{2}\).To this point the treatment of orbital angular momentum is identical for any spherically symmetric potential. Now we must consider the specific form of the potential; for instance in the case of the isotropic harmonic oscillator, \(U(r)=1 / 2 \kappa r^{2}\). In the case of a free particle, we substitute \(V(r)=0\) in eq. (\ref{97}) and find that the radial solutions can be written in terms of spherical Bessel functions, \(j_{\ell}\). Then the solutions to the full wavefunction for the free particle can be written as \[\Psi(r, \theta, \phi)=j_{\ell}(\mathrm{k} r) Y_{\ell}^{m}(\theta, \phi) \label{100}\]where the wavevector k is defined as in eq. (\ref{46}).For a hydrogen-like atom, a single electron of charge e interacts with a nucleus of charge \(Ze\) under the influence of a Coulomb potential\[V_{H}(r)=-\frac{Z e^{2}}{4 \pi \epsilon_{0}} \frac{1}{r} \label{101}\]We can simplify the expression by defining atomic units for distance and energy. The Bohr radius is defined as\[a_{0}=4 \pi \varepsilon_{0} \frac{\hbar^{2}}{m_{e} e^{2}}=5.2918 \times 10^{-11} \mathrm{~m} \label{102}\]and the Hartree is\[\mathcal{E}_{H}=\frac{1}{4 \pi \varepsilon_{0}} \frac{e^{2}}{a_{0}}=4.3598 \times 10^{-18} J=27.2 \mathrm{eV} \label{103}\]Written in terms of atomic units, we can see from eq. (\ref{103}) that eq. (\ref{101}) becomes \(\left(V / \mathcal{E}_{H}\right)=-Z /\left(r / a_{0}\right)\). Thus the conversion effectively sets the SI variables \(m_{\mathrm{e}}=e=\left(4 \pi \varepsilon_{0}\right)^{-1}=\hbar = 1\). Then the radial wave equation is \[\frac{\partial^{2} \chi}{\partial r^{2}}+\left(\frac{2 Z}{r}-\frac{\ell(\ell+1)}{r^{2}}\right) \chi=-2 E \chi \label{104}\]The effective potential within the parentheses in eq. (\ref{104}) is shown in the figure below for varying \(\ell\). Solutions to the radial wavefunction for the hydrogen atom take the form \[R_{n \ell}(r)=N_{n \ell}^{R} \rho^{\ell} \mathcal{L}_{n+\ell}^{2 \ell+1}(\rho) e^{-\rho / 2} \label{105}\]where the reduced radius \(\rho=2 r / n a_{0}\) and \(\mathcal{L}_{k}^{\alpha}(z)\) are the associated Laguerre polynomials. The principal quantum number takes on integer values \(n=1,2,3 \ldots\), and \(\ell\) is constrained such that \(\ell= 0,1,2 \ldots n-1\). The radial normalization factor in eq.
(\ref{105}) is\[N_{n \ell}^{R}=-\frac{2}{n^{3} a_{0}^{3 / 2}}\left[\frac{(n-\ell-1) !}{[(n+\ell) !]^{3}}\right]^{1 / 2} \label{106}\]The energy eigenvalues are\[E_{n}=-\frac{Z^{2}}{2 n^{2}} \mathcal{E}_{H} \label{107}\][Figure: the radial effective potential \(U_{eff}(\rho)\) for varying \(\ell\); the radial wavefunction \(R\) and the radial distribution function \(\chi=r R\).]In describing electronic wavefunctions, the electron spin also results in a contribution to the total angular momentum, and results in a spin contribution to the wavefunction. The electron spin angular momentum \(S\) and its z-projection are quantized as\[S^{2}\left|s m_{s}\right\rangle=\hbar^{2} s(s+1)\left|s m_{s}\right\rangle \quad s=0,1 / 2,1,3 / 2,2 \ldots \label{108}\]\[S_{z}\left|s m_{s}\right\rangle=\hbar m_{s}\left|s m_{s}\right\rangle \quad m_{s}=-s,-s+1, \ldots, s \label{109}\]where the electron spin eigenstates \(\left|s m_{s}\right\rangle\) are labeled by the electron spin angular momentum quantum number \(s\) and the spin magnetic quantum number \(m_{s}\). The number of values of \(S_{z}\) is \(2 s+1\) and is referred to as the spin multiplicity. As fermions, electrons have half-integer spin, and each unpaired electron contributes \(1/2\) to the electron spin quantum number \(s\). A single unpaired electron has \(s=1/2\), for which \(m_{s}=\pm 1/2\), corresponding to spin-up and spin-down configurations. For multi-electron systems, the spin is calculated as the vector sum of spins, essentially \(1/2\) times the number of unpaired electrons.The resulting total angular momentum for an electron is \(J=L+S\). \(J\) has associated with it the total angular momentum quantum number \(j\), which takes on values of \(j=|\ell-s|,|\ell-s|+1, \ldots \ell+s\). The additive nature of the orbital and spin contributions to the angular momentum leads to a total electronic wavefunction that is a product of spatial and spin wavefunctions. \[\Psi_{\text {tot}}=\Psi(r, \theta, \phi)\left|s m_{s}\right\rangle \label{110}\]Thus the state of an electron can be specified by four quantum numbers, \(\Psi_{tot}=\left|n \ell m_{\ell} m_{s}\right\rangle\).In the case of a freely spinning anisotropic molecule, the total angular momentum \(J\) is obtained from the sum of the orbital angular momentum \(L\) and spin angular momentum \(S\) for the molecular constituents: \(J=L+S\), where \(L=\sum_{i} L_{i}\) and \(S=\sum_{i} S_{i}\). The case of the rigid rotor refers to the minimal model for the rotational quantum states of a freely spinning object that has cylindrical symmetry and no magnetic spin. Then, the Hamiltonian is given by the rotational kinetic energy\[H_{r o t}=\frac{\hat{J}^{2}}{2 I} \label{111}\]\(I\) is the moment of inertia about the principal axis of rotation. The eigenfunctions for this Hamiltonian are the spherical harmonics \(Y_{J, M}(\theta, \phi)\) \[\begin{array}{ll}
\hat{J}^{2}\left|Y_{J, M}\right\rangle=\hbar^{2} J(J+1)\left|Y_{J, M}\right\rangle & J=0,1,2 \ldots \\
\hat{J}_{z}\left|Y_{J, M}\right\rangle=M \hbar\left|Y_{J, M}\right\rangle & M=-J,-J+1, \ldots, J
\end{array} \label{112}\]\(J\) is the rotational quantum number. \(M\) is its projection onto the z axis. The energy eigenvalues for \(H_{\text {rot}}\) are\[E_{J, M}=\bar{B} J(J+1) \label{113}\]where the rotational constant is\[\bar{B}=\frac{\hbar^{2}}{2 I} \label{114}\]Since the energies do not depend on \(M\), each level is \((2J+1)\)-fold degenerate. More commonly, \(\bar{B}\) is given in units of \(\mathrm{cm}^{-1}\) using \(\bar{B}=h / 8 \pi^{2} I c\).This page titled 1.3: Basic Quantum Mechanical Models is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,265
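As a numerical illustration of eqs. (\ref{113})–(\ref{114}) (an added sketch, not from the original notes): the script below estimates \(\bar{B}\) for a diatomic rigid rotor, using CO with an approximate bond length of 1.128 Å as an assumed example input.

import numpy as np
from scipy import constants as const

# Rigid rotor: B (cm^-1) = h / (8 pi^2 I c), with I = mu * r^2 for a diatomic
m1 = 12.000 * const.u                        # mass of 12C (kg)
m2 = 15.995 * const.u                        # mass of 16O (kg)
r = 1.128e-10                                # approximate CO bond length (m)

mu = m1 * m2 / (m1 + m2)                     # reduced mass
I = mu * r**2                                # moment of inertia about the rotation axis
B = const.h / (8 * np.pi**2 * I * const.c * 100.0)   # factor 100 converts m^-1 to cm^-1
print(f"B = {B:.3f} cm^-1")                  # ~1.93 cm^-1 for these inputs

# Rotational energies E_J = B J(J+1); each level is (2J+1)-fold degenerate in M
print([round(B * J * (J + 1), 2) for J in range(4)])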
1.4: Exponential Operators
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/01%3A_Overview_of_Time-Independent_Quantum_Mechanics/1.04%3A_Exponential_Operators | Throughout our work, we will make use of exponential operators of the form\[\hat {T} = e^{- i \hat {A}} \label{115}\]We will see that these exponential operators act on a wavefunction to move it in time and space, and are therefore also referred to as propagators. Of particular interest to us is the time-evolution operator, \(\hat {U} = e^{- i \hat {H} t / \hbar},\) which propagates the wavefunction in time. Note the operator \(\hat{T}\) is a function of an operator, \(f(\hat{A})\). A function of an operator is defined through its expansion in a Taylor series, for instance\[\hat {T} = e^{- i \hat {A}} = \sum _ {n = 0}^{\infty} \dfrac {( - i \hat {A} )^{n}} {n !} = 1 - i \hat {A} - \dfrac {\hat {A}^{2}} {2} - \cdots \label{116}\]Since we use them so frequently, let’s review the properties of exponential operators that can be established with Equation \ref{116}. If the operator \(\hat {A}\) is Hermitian, then \(\hat {T} = e^{- i \hat {A}}\) is unitary, i.e., \(\hat {T}^{\dagger} = \hat {T}^{- 1}.\) Thus the Hermitian conjugate of \(\hat {T}\) reverses the action of \(\hat {T}\). For the time-propagator \(\hat {U}\), \(\hat {U}^{\dagger}\) is often referred to as the time-reversal operator.The eigenstates of the operator \(\hat {A}\) are also eigenstates of \(f(\hat {A})\), and the eigenvalues are functions of the eigenvalues of \(\hat {A}\). Namely, if you know the eigenvalues and eigenvectors of \(\hat {A}\), i.e., \(\hat {A} \varphi _ {n} = a _ {n} \varphi _ {n},\) you can show by expanding the function that\[f ( \hat {A} ) \varphi _ {n} = f \left( a _ {n} \right) \varphi _ {n} \label{117}\]Our most common application of this property will be to exponential operators involving the Hamiltonian. Given the eigenstates \(\varphi _ {n}\), then \(\hat {H} | \varphi _ {n} \rangle = E _ {n} | \varphi _ {n} \rangle\) implies\[e^{- i \hat {H} t / \hbar} | \varphi _ {n} \rangle = e^{- i E _ {n} t / \hbar} | \varphi _ {n} \rangle \label{118}\]Just as \(\hat {U} = e^{- i \hat {H} t / \hbar}\) is the time-evolution operator that displaces the wavefunction in time, \(\hat {D} _ {x} ( \lambda ) = e^{- i \hat {p} _ {x} \lambda / \hbar}\) is the spatial displacement operator that moves \(\psi\) along the \(x\) coordinate. The action of \(\hat {D} _ {x} ( \lambda )\) is to displace the wavefunction by an amount \(\lambda\)\[| \psi ( x - \lambda ) \rangle = \hat {D} _ {x} ( \lambda ) | \psi (x) \rangle \label{119}\]Also, applying \(\hat {D} _ {x} ( \lambda )\) to a position operator shifts the operator by \(\lambda\)\[\hat {D} _ {x}^{\dagger} \hat {x} \hat {D} _ {x} = \hat {x} + \lambda \label{120}\]Thus \(e^{- i \hat {p} _ {x} \lambda / \hbar} | x \rangle\) is an eigenvector of \(\hat {x}\) with eigenvalue \(x + \lambda\) instead of \(x\). The operator \(\hat {D} _ {x} = e^{- i \hat {p} _ {x} \lambda / \hbar}\) is a displacement operator for \(x\) position coordinates. Similarly, \(\hat {D} _ {y} = e^{- i \hat {p} _ {y} \lambda / \hbar}\) generates displacements in \(y\) and \(\hat {D_z}\) in \(z\). Similar to the time-propagator \(\hat {U}\), the displacement operator \(\hat {D}\) must be unitary, since the action of \(\hat {D}^{\dagger} \hat {D}\) must leave the system unchanged.
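These two properties, unitarity and eq. (\ref{117}), are easy to verify numerically (an added sketch, assuming Python with NumPy/SciPy):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                   # a random Hermitian operator

T = expm(-1j * A)                          # T = exp(-i A)
print(np.allclose(T.conj().T @ T, np.eye(4)))             # unitary: T^dagger T = 1

a, V = np.linalg.eigh(A)                   # eigenvalues and eigenvectors of A
print(np.allclose(T @ V, V @ np.diag(np.exp(-1j * a))))   # f(A) v_n = f(a_n) v_n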
That is, if \(\hat {D} _ {x}\) shifts the system to \(x\) from \(x_0\), then \(\hat {D} _ {x}^{\dagger}\) shifts the system from \(x\) back to \(x_0\). We know intuitively that linear displacements commute. For example, if we wish to shift a particle in two dimensions, \(x\) and \(y\), the order of displacement does not matter:\[\begin{aligned} \hat {D} _ {x} ( \lambda ) \hat {D} _ {y} \left( \lambda^{\prime} \right) | \psi ( x , y ) \rangle & = | \psi ( x - \lambda , y - \lambda^{\prime} ) \rangle \\ & = \hat {D} _ {y} \left( \lambda^{\prime} \right) \hat {D} _ {x} ( \lambda ) | \psi ( x , y ) \rangle \end{aligned}\]These displacement operators commute, as expected from \([p_x,p_y] = 0.\)Similar to the displacement operator, we can define rotation operators that depend on the angular momentum operators, \(L_x\), \(L_y\), and \(L_z\). For instance,\[\hat {R} _ {x} ( \phi ) = e^{- i \phi L _ {x} / \hbar} \label{121}\]gives a rotation by angle \(\phi\) about the \(x\) axis. Unlike linear displacement, rotations about different axes do not commute. For example, consider a state representing a particle displaced along the z axis, \(| z_0 \rangle\). Now the action of two rotations \(\hat {R} _ {x}\) and \(\hat {R} _ {y}\) by an angle of \(\phi = \pi / 2\) on this particle differs depending on the order of operation. If we rotate first about \(x\), the operation\[e^{- i \tfrac {\pi} {2} L _ {y} / \hbar} e^{- i \tfrac {\pi} {2} L _ {x} / \hbar} | z _ {0} \rangle \rightarrow | - y \rangle \label{122}\]leads to the particle on the –y axis, whereas the reverse order\[e^{- i \tfrac {\pi} {2} L _ {x} / \hbar} e^{- i \tfrac {\pi} {2} L _ {y} / \hbar} | z _ {0} \rangle \rightarrow | + x \rangle \label{123}\]leads to the particle on the +x axis. The final states of these two rotations taken in opposite order differ by a rotation about the z axis. Since rotations about different axes do not commute, we expect the angular momentum operators not to commute. Indeed, we know that\[\left[ L _ {x} , L _ {y} \right] = i \hbar L _ {z}\]where the commutator of rotations about the x and y axes is related by a z-axis rotation. As with rotation operators, we will need to be careful with time-propagators to determine whether the order of time-propagation matters. This, in turn, will depend on whether the Hamiltonians at two points in time commute.Properties of exponential operators1. If \(\hat{A}\) and \(\hat{B}\) do not commute, but \([ \hat {A} , \hat {B} ]\) commutes with \(\hat{A}\) and \(\hat{B}\), then\[e^{\hat {A} + \hat {B}} = e^{\hat {A}} e^{\hat {B}} e^{- \frac {1} {2} [ \hat {A} , \hat {B} ]} \label{124}\]\[e^{\hat {A}} e^{\hat {B}} = e^{\hat {B}} e^{\hat {A}} e^{- [ \hat {B} , \hat {A} ]} \label{125}\]2. More generally, if \(\hat{A}\) and \(\hat{B}\) do not commute,\[ e^{\hat {A}} e^{\hat {B}} = {\mathrm {exp}} \left[ \hat {A} + \hat {B} + \dfrac {1} {2} [ \hat {A} , \hat {B} ] + \dfrac {1} {12} ( [ \hat {A} , [ \hat {A} , \hat {B} ] ] + [ \hat {B} , [ \hat {B} , \hat {A} ] ] ) + \cdots \right] \label{126}\]3. The Baker–Hausdorff relationship:\[\mathrm {e}^{i \hat {G} \lambda} \hat {A} \mathrm {e}^{- i \hat {G} \lambda} = \hat {A} + i \lambda [ \hat {G} , \hat {A} ] + \left( \dfrac {i^{2} \lambda^{2}} {2 !} \right) [ \hat {G} , [ \hat {G} , \hat {A} ] ] + \ldots + \left( \dfrac {i^{n} \lambda^{n}} {n !} \right) [ \hat {G} , [ \hat {G} , \ldots [ \hat {G} , \hat {A} ] \ldots ] ] + \ldots \label{127}\]where \(λ\) is a number.This page titled 1.4: Exponential Operators is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,266
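The geometry behind eqs. (\ref{122}) and (\ref{123}) can also be checked classically (an added sketch, not part of the original notes; it exponentiates SO(3) generators with SciPy rather than the quantum \(L\) operators):

import numpy as np
from scipy.linalg import expm

# Generators of rotations of a 3-vector about x and y; R(phi) = expm(phi * G)
Gx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Gy = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
Rx, Ry = expm(np.pi / 2 * Gx), expm(np.pi / 2 * Gy)

z0 = np.array([0., 0., 1.])               # particle displaced along z
print(np.round(Ry @ Rx @ z0))             # [ 0. -1.  0.]: rotate about x first -> -y
print(np.round(Rx @ Ry @ z0))             # [ 1.  0.  0.]: rotate about y first -> +x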
1.5: Numerically Solving the Schrödinger Equation
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/01%3A_Overview_of_Time-Independent_Quantum_Mechanics/1.05%3A_Numerically_Solving_the_Schrodinger_Equation | Often the bound potentials that we encounter are complex, and the time-independent Schrödinger equation will need to be evaluated numerically. There are two common numerical methods for solving for the eigenvalues and eigenfunctions of a potential. Both methods require truncating and discretizing a region of space that is normally spanned by an infinite dimensional Hilbert space. The Numerov method is a finite difference method that calculates the shape of the wavefunction by integrating step-by-step along a grid. The discrete variable representation (DVR) method makes use of a transformation between a finite discrete basis and the finite grid that spans the region of interest.A one-dimensional Schrödinger equation for a particle in a potential can be numerically solved on a grid that discretizes the position variable using a finite difference method. The TISE is\[[ T + V (x) ] \psi (x) = E \psi (x) \label{128}\]with\[T = - \dfrac {\hbar^{2}} {2 m} \dfrac {\partial^{2}} {\partial x^{2}},\]which we can write as\[\psi^{\prime \prime} (x) = - k^{2} (x) \psi (x) \label{129}\]where\[k^{2} (x) = \dfrac {2 m} {\hbar^{2}} [ E - V (x) ].\]If we discretize the variable \(x\), choosing a grid spacing \(\delta x\) over which \(V\) varies slowly, we can use a three point finite difference to approximate the second derivative:\[f _ {i}^{\prime \prime} \approx \dfrac {1} {\delta x^{2}} \left( f \left( x _ {i + 1} \right) - 2 f \left( x _ {i} \right) + f \left( x _ {i - 1} \right) \right) \label{130}\]The discretized Schrödinger equation can then be written in the form\[\psi \left( x _ {i + 1} \right) - 2 \psi \left( x _ {i} \right) + \psi \left( x _ {i - 1} \right) = - \delta x^{2}\, k^{2} \left( x _ {i} \right) \psi \left( x _ {i} \right) \label{131}\]Solving this equation for \(\psi \left( x _ {i + 1} \right)\), one can iteratively solve for the eigenfunction. In practice, you discretize over a range of space such that the highest and lowest values lie in a region where the potential is very high or forbidden. Splitting the space into N points, choose the first two values \(\psi \left( x _ {1} \right) = 0\) and \(\psi \left( x _ {2} \right)\) to be a small positive or negative number, guess \(E\), and propagate iteratively to \(\psi \left( x _ {N} \right)\). A comparison of the wavefunctions obtained by propagating from \(x_1\) to \(x_N\) with that obtained propagating from \(x_N\) to \(x_1\) tells you how good your guess of \(E\) was.The Numerov method improves on Equation \ref{131} by taking into account the fourth derivative of the wavefunction, \(\psi^{( 4 )}\), leading to errors on the order \(O \left( \delta x^{6} \right)\).
Equation \ref{130} becomes\[f _ {i}^{\prime \prime} \approx \dfrac {1} {\delta x^{2}} \left( f \left( x _ {i + 1} \right) - 2 f \left( x _ {i} \right) + f \left( x _ {i - 1} \right) \right) - \dfrac {\delta x^{2}} {12} f _ {i}^{( 4 )} \label{132}\]By differentiating Equation \ref{129} we know\[\psi^{( 4 )} (x) = - \left( k^{2} (x) \psi (x) \right)^{\prime \prime}\]and the discretized Schrödinger equation becomes\[\left.\begin{aligned} \psi^{\prime \prime} \left( x _ {i} \right) & = \dfrac {1} {\delta x^{2}} \left( \psi \left( x _ {i + 1} \right) - 2 \psi \left( x _ {i} \right) + \psi \left( x _ {i - 1} \right) \right) + \\ & \dfrac {1} {12} \left( k^{2} \left( x _ {i + 1} \right) \psi \left( x _ {i + 1} \right) - 2 k^{2} \left( x _ {i} \right) \psi \left( x _ {i} \right) + k^{2} \left( x _ {i - 1} \right) \psi \left( x _ {i - 1} \right) \right) \end{aligned} \right. \label{133}\]Setting \(\psi^{\prime\prime}(x_i) = -k^{2}(x_i)\psi(x_i)\), this equation leads to the iterative solution for the wavefunction\[\psi \left( x _ {i + 1} \right) = \dfrac{\psi \left( x _ {i} \right) \left( 2 - \dfrac {10 \delta x^{2}} {12} k^{2} \left( x _ {i} \right) \right) - \psi \left( x _ {i - 1} \right) \left( 1 + \dfrac {\delta x^{2}} {12} k^{2} \left( x _ {i - 1} \right) \right)}{1 + \dfrac {\delta x^{2}} {12} k^{2} \left( x _ {i + 1} \right)} \label{134}\]Numerical solutions to the wavefunctions of a bound potential in the position representation require truncating and discretizing a region of space that is normally spanned by an infinite dimensional Hilbert space. The DVR approach uses a real space basis set whose eigenstates \(\varphi _ {i} (x)\) we know and that span the space of interest (for instance, harmonic oscillator wavefunctions) to express the eigenstates of a Hamiltonian in a grid basis (\(\theta _ {j}\)) that is meant to approximate the real space continuous basis \(\delta (x)\). The two basis sets, which we term the eigenbasis (\(\varphi\)) and grid basis (\(\theta\)), will be connected through a unitary transformation\[\Phi^{\dagger} \varphi (x) = \theta (x) \qquad \Phi\, \theta (x) = \varphi (x) \label{135}\]For \(N\) discrete points in the grid basis, there will be \(N\) eigenvectors in the eigenbasis, so that the properties of projection and completeness hold in both bases. Wavefunctions can be obtained by constructing the Hamiltonian in the eigenbasis, \(H = T ( \hat {p} ) + V ( \hat {x} ),\) transforming to the DVR basis, \(H^{D V R} = \Phi H \Phi^{\dagger},\) and then diagonalizing.Here we will discuss a version of DVR in which the grid basis is set up to mirror the continuous \(| x \rangle\) eigenbasis. We begin by choosing the range of \(x\) that contains the bound states of interest and discretizing this into \(N\) points (\(x_i\)) equally spaced by \(\Delta x\). We assume that the DVR basis functions \(\theta _ {j} \left( x _ {i} \right)\) resemble the infinite dimensional position basis\[\theta _ {j} \left( x _ {i} \right) = \sqrt {\Delta x}\, \delta _ {i j} \label{136}\]Our truncation is enabled using a projection operator in the reduced space\[P _ {N} = \sum _ {i = 1}^{N} | \theta _ {i} \rangle \langle \theta _ {i} | \approx 1 \label{137}\]which is valid for appropriately large \(N\).
The complete Hamiltonian can be expressed in the DVR basis as\[H^{D V R} = T^{D V R} + V^{D V R}.\]For the potential energy, since \(\left\{\theta _ {i} \right\}\) is localized with \(\left\langle \theta _ {i} | \theta _ {j} \right\rangle = \delta _ {i j}\), we make the DVR approximation, which casts \(V^{DVR}\) into a diagonal form that is equal to the potential energy evaluated at the grid point:\[V _ {i j}^{D V R} = \left\langle \theta _ {i} | V ( \hat {x} ) | \theta _ {j} \right\rangle \approx V \left( x _ {i} \right) \delta _ {i j} \label{138}\]This comes from approximating the transformation as \(\Phi V ( \hat {x} ) \Phi^{\dagger} \approx V \left( \Phi \hat {x} \Phi^{\dagger} \right).\)For the kinetic energy matrix elements \(\left\langle \theta _ {i} | T ( \hat {p} ) | \theta _ {j} \right\rangle\), we need to evaluate second derivatives between different grid points. Fortunately, Colbert and Miller have simplified this process by finding an analytical form for the \(T^{DVR}\) matrix for a uniformly gridded box with a grid spacing of \(∆x\):\[T _ {i j}^{\mathrm {DVR}} = \dfrac {\hbar^{2} ( - 1 )^{i - j}} {2 m \Delta x^{2}} \left\{\begin{array} {c c} {\pi^{2} / 3} & {i = j} \\ {2 / ( i - j )^{2}} & {i \neq j} \end{array} \right\} \label{139}\]This comes from a Fourier expansion in a uniformly gridded box; the \((-1)^{i-j}\) factor makes the matrix elements oscillatory in \(x\) with a period of \(\Delta x\). This expression becomes exact in the limit of \(N \rightarrow \infty\) or \(\Delta x \rightarrow 0\). The numerical routine becomes simple and efficient. We construct the Hamiltonian by filling it with the potential and kinetic energy matrix elements given by Equations \ref{138} and \ref{139}. Then we diagonalize \(H^{DVR}\), from which we obtain \(N\) eigenvalues and the \(N\) corresponding eigenfunctions, as sketched below.This page titled 1.5: Numerically Solving the Schrödinger Equation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,267
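A minimal implementation of this routine (an added sketch, assuming Python with NumPy, \(\hbar = m = 1\), and a harmonic test potential with \(\omega = 1\)):

import numpy as np

# Colbert-Miller DVR for a 1D bound potential (hbar = m = 1)
N = 201
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Kinetic energy matrix, eq. (139)
T = np.empty((N, N))
for i in range(N):
    for j in range(N):
        if i == j:
            T[i, j] = (np.pi**2 / 3) / (2 * dx**2)
        else:
            T[i, j] = (-1.0) ** (i - j) * (2.0 / (i - j) ** 2) / (2 * dx**2)

# Potential energy is diagonal in the DVR basis, eq. (138)
V = np.diag(0.5 * x**2)                     # harmonic oscillator, omega = 1

E, psi = np.linalg.eigh(T + V)              # diagonalize H^DVR
print(np.round(E[:4], 4))                   # ~ [0.5, 1.5, 2.5, 3.5]

The recovered eigenvalues approach the exact \((n + 1/2)\hbar\omega\) ladder as the grid range and density are increased.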
10.1: Definitions, Properties, and Examples of Correlation Functions
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.01%3A_Definitions_Properties_and_Examples_of_Correlation_Functions | Returning to the microscopic fluctuations of a molecular variable \(A\), there seems to be little information in observing the trajectory of a variable characterizing the time-dependent behavior of an individual molecule. However, these dynamics are not entirely random, since they are a consequence of time-dependent interactions with the environment. We can provide a statistical description of the characteristic time scales and amplitudes of these changes by comparing the value of \(A\) at time \(t\) with the value of \(A\) at a time \(t’\) later.We define a time-correlation function (TCF) as a time-dependent quantity, \(A(t)\), multiplied by that quantity at some later time, \(A(t')\), and averaged over an equilibrium ensemble:\[C _ {A A} \left( t , t^{\prime} \right) \equiv \left\langle A (t) A \left( t^{\prime} \right) \right\rangle _ {e q}\label{9.1}\]The classical form of the correlation function is evaluated as\[C _ {A A} \left( t , t^{\prime} \right) = \int d \mathbf {p} \int d \mathbf {q} A ( \mathbf {p} , \mathbf {q} ; t ) A \left( \mathbf {p} , \mathbf {q} ; t^{\prime} \right) \rho _ {e q} ( \mathbf {p} , \mathbf {q} ) \label{9.2}\]whereas the quantum correlation function can be evaluated as\[\begin{align} C _ {A A} \left( t , t^{\prime} \right) &= \operatorname {Tr} \left[ \rho _ {e q} A (t) A \left( t^{\prime} \right) \right] \\[4pt] &= \sum _ {n} p _ {n} \left\langle n \left| A (t) A \left( t^{\prime} \right) \right| n \right\rangle \label{9.3} \end{align}\]where\[p _ {n} = e^{- \beta E _ {n}} / Z.\]These are auto-correlation functions, which correlate the same variable at two points in time, but one can also define a cross-correlation function that describes the correlation of two different variables in time\[C _ {A B} \left( t , t^{\prime} \right) \equiv \left\langle A (t) B \left( t^{\prime} \right) \right\rangle \label{9.4}\]So, what does a time-correlation function tell us? Qualitatively, a TCF describes how long a given property of a system persists until it is averaged out by microscopic motions and interactions with its surroundings. It describes how and when a statistical relationship has vanished. We can use correlation functions to describe various time-dependent chemical processes. For instance, we will use \(\langle \mu (t) \mu ( 0 ) \rangle\), the correlation function for the dynamics of the molecular dipole moment, to describe absorption spectroscopy. We will also use them for relaxation processes induced by the interaction of a system and bath:\[\left\langle H _ {S B} (t) H _ {S B} ( 0 ) \right\rangle.\]Classically, you can use TCFs to characterize transport processes. For instance a diffusion coefficient is related to the velocity correlation function:\[D = \frac {1} {3} \int _ {0}^{\infty} d t \langle v (t) v ( 0 ) \rangle.\]A typical correlation function for random fluctuations of the variable \(A\) at thermal equilibrium decays from its initial value \(\langle A^{2} \rangle\) toward an asymptotic long-time value \(\langle A \rangle^{2}\) over a characteristic correlation time; the examples below illustrate this behavior.Example \(\PageIndex{1}\): Velocity Autocorrelation Function for GasLet’s analyze a dilute gas of molecules which have a Maxwell–Boltzmann distribution of velocities. We focus on the component of the molecular velocity along the \(\hat{x}\) direction, \(v_x\). We know that the average velocity is \(\left\langle v _ {x} \right\rangle = 0\).
The velocity correlation function is\[C _ {v _ {x} v _ {x}} ( \tau ) = \left\langle v _ {x} ( \tau ) v _ {x} ( 0 ) \right\rangle \nonumber \]From the equipartition principle the average translational energy is\[\frac {1} {2} m \left\langle v _ {x}^{2} \right\rangle = k _ {B} T / 2 \nonumber\]For time scales short compared to collisions between molecules, the velocity of any given molecule remains constant and unchanged, so the correlation function for the velocity is also unchanged at \(k_BT/m\). This non-interacting regime corresponds to the behavior of an ideal gas.For any real gas, there will be collisions that randomize the direction and speed of the molecules, so that any molecule over a long enough time will sample the various velocities within the Maxwell–Boltzmann distribution. From the trajectory of x-velocities for a given molecule we can calculate \(C _ {v _ {x} v _ {x}} ( \tau )\) using time-averaging. The correlation function will decay with a correlation time \(\tau_c\), which is related to the mean time between collisions. After enough collisions, the correlation with the initial velocity is lost and \(C _ {v _ {x} v _ {x}} ( \tau )\) approaches \(\left\langle v _ {x} \right\rangle^{2} = 0\). Finally, we can determine the diffusion constant for the gas, which relates the time and mean square displacement of the molecules:\[\left\langle x^{2} (t) \right\rangle = 2 D _ {x} t.\nonumber\]From\[D _ {x} = \int _ {0}^{\infty} d t \left\langle v _ {x} (t) v _ {x} ( 0 ) \right\rangle\nonumber\]we have\[D _ {x} = k _ {B} T \tau _ {c} / m\nonumber\]In viscous fluids \(\tau _ {c} / m\) is called the mobility, \(\mu\).Example \(\PageIndex{2}\): Dipole Moment Correlation FunctionNow consider the correlation function for the dipole moment of a polar diatomic molecule in a dilute gas, \(\overline {\mu}\). For a rigid rotating object, we can decompose the dipole into a magnitude and a direction unit vector:\[\overline {\mu} = \mu _ {0} \hat {u}.\nonumber\]We know that \(\langle \overline {\mu} \rangle = 0\) since all orientations of the gas phase molecules are equally probable. The correlation function is\[\begin{align*} C _ {\mu \mu} (t) & = \langle \overline {\mu} (t) \overline {\mu} ( 0 ) \rangle \\[4pt] & = \left\langle \mu _ {0}^{2} \right\rangle \langle \hat {u} (t) \cdot \hat {u} ( 0 ) \rangle \end{align*}\]This correlation function projects the time-dependent orientation of the molecule onto the initial orientation. Free inertial rotational motion will lead to oscillations in the correlation function as the dipole spins. The oscillations in this correlation function can be related to the speed of rotation and thereby the molecule’s moment of inertia (discussed below). Any apparent damping in this correlation function would reflect the thermal distribution of angular velocities. In practice a real gas would also have the collisional damping effects described in Example \(\PageIndex{1}\) superimposed on this relaxation process.Example \(\PageIndex{3}\): Harmonic Oscillator Correlation FunctionThe time-dependent motion of a harmonic vibrational mode is given by Newton’s law in terms of the acceleration and restoring force as \(m \ddot {q} = - \kappa q\) or \(\ddot {q} = - \omega^{2} q\) where the force constant is \(\kappa = m \omega^{2}\).
For a trajectory with initial condition \(q(0)\), we can write a solution to this equation as\[q (t) = q ( 0 ) \cos \omega t\nonumber\]Furthermore, the equipartition theorem says that the equilibrium thermal energy in a harmonic vibrational mode is\[\frac {1} {2} \kappa \left\langle q^{2} \right\rangle = \frac {k _ {B} T} {2}\nonumber\]We therefore can write the correlation function for the harmonic vibrational coordinate as\[\begin{align*} C _ {q q} (t) &= \langle q (t) q ( 0 ) \rangle \\[4pt] &= \left\langle q^{2} \right\rangle \cos \omega t \\[4pt] & = \frac {k _ {B} T} {\kappa} \cos \omega t \end{align*}\]A numerical sketch of this ensemble average follows below.This page titled 10.1: Definitions, Properties, and Examples of Correlation Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,269
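Example 3 can be verified by brute-force ensemble averaging (an added sketch with assumed reduced units \(k_BT = m = \omega = 1\)): sample initial conditions from the Boltzmann distribution, propagate each oscillator analytically, and average \(q(t)q(0)\).

import numpy as np

kT, m, w = 1.0, 1.0, 1.0
kappa = m * w**2
rng = np.random.default_rng(1)
n = 200_000

q0 = rng.normal(0.0, np.sqrt(kT / kappa), n)   # equipartition: <q^2> = kT/kappa
v0 = rng.normal(0.0, np.sqrt(kT / m), n)       # <v^2> = kT/m, independent of q0

t = np.linspace(0.0, 20.0, 201)
# q(t) = q(0) cos(wt) + (v(0)/w) sin(wt); the cross term averages to zero
Cqq = np.array([np.mean((q0 * np.cos(w * ti) + (v0 / w) * np.sin(w * ti)) * q0)
                for ti in t])

err = np.max(np.abs(Cqq - (kT / kappa) * np.cos(w * t)))
print(err)                                      # ~1e-3, i.e. sampling noise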
10.2: Correlation Function from a Discrete Trajectory
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.02%3A_Correlation_Function_from_a_Discrete_Trajectory | In practice, classical correlation functions in molecular dynamics simulations or single molecule experiments are determined from a time-average over a long trajectory at discretely sampled data points. Let’s evaluate \(C _ {A A}\) for a discrete and finite trajectory in which we are given a series of \(N\) observations of the dynamical variable \(A\) at equally separated time points \(t_i\). The separation between time points is \(t _ {i + 1} - t _ {i} = \Delta t\), and the length of the trajectory is \(T = N \Delta t\). Then we have\[C _ {A A} = \frac {1} {T} \sum _ {i , j = 1}^{N} \Delta t A \left( t _ {i} \right) A \left( t _ {j} \right) = \frac {1} {N} \sum _ {i , j = 1}^{N} A _ {i} A _ {j} \label{9.16}\]where \(A _ {i} = A \left( t _ {i} \right)\). To make this more useful we want to express it in terms of the time interval between points, \(\tau = t _ {j} - t _ {i} = ( j - i ) \Delta t\), and average over all possible pairwise products of \(A\) separated by \(\tau\). Defining a new count integer \(n = j - i\), we can express the delay as \(\tau = n \Delta t\). For a finite data set there are a different number of observations to average over at each time interval (\(n\)). We have the most pairwise products, \(N\) to be precise, when the time points are equal (\(t_i = t_j\)). We only have one data pair for the maximum delay \(\tau = T\). Therefore, the number of pairwise products for a given delay \(\tau\) is \(N-n\). So we can write Equation \ref{9.16} as\[C _ {A A} ( \tau ) = C ( n ) = \frac {1} {N - n} \sum _ {i = 1}^{N - n} A _ {i + n} A _ {i} \label{9.17}\]Note that this expression will only be calculated for positive values of \(n\), for which \(t_j \geq t_i\). As an example consider the following calculation for fluctuations in a vibrational frequency \(\omega(t)\), which consists of 32000 consecutive frequencies in units of \(\mathrm{cm}^{-1}\) for points separated by 10 femtoseconds, and has a mean value of \(\omega _ {0} = 3244\ \mathrm {cm}^{- 1}\). This trajectory illustrates that there are fast fluctuations on femtosecond time scales, but the behavior is seemingly random on 100 picosecond time scales.After determining the variation from the mean, \(\delta \omega \left( t _ {i} \right) = \omega \left( t _ {i} \right) - \omega_0\), the frequency correlation function is determined from Equation \ref{9.17}, with the substitution \(\delta \omega \left( t _ {i} \right) \rightarrow A _ {i}\).We can see that the correlation function reveals no frequency correlation on the time scale of \(10^4\)–\(10^5\) fs; however, a decay of the correlation function is observed for short delays, signifying the loss of memory in the fluctuating frequency on the \(10^3\) fs time scale. From Equation \ref{9.15}, we find that the correlation time is \(\tau_C = 785\ \mathrm{fs}\). A numerical sketch of this estimator follows below.This page titled 10.2: Correlation Function from a Discrete Trajectory is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,270
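Equation (\ref{9.17}) translates directly into a few lines of code (an added sketch; the trajectory below is a synthetic, hypothetical stand-in for the frequency data described above, generated as exponentially correlated noise with \(\tau_c = 785\) fs and \(\Delta t = 10\) fs):

import numpy as np

def tcf(A, n_max):
    # C(n) = (1/(N-n)) sum_i A_{i+n} A_i, eq. (9.17)
    A = np.asarray(A, dtype=float)
    N = len(A)
    return np.array([np.mean(A[n:] * A[:N - n]) for n in range(n_max)])

# Synthetic delta-omega trajectory: AR(1) noise with correlation time tau_c
rng = np.random.default_rng(2)
dt, tau_c, N = 10.0, 785.0, 32000           # fs
f = np.exp(-dt / tau_c)
dw = np.empty(N)
dw[0] = rng.normal()
for i in range(1, N):
    dw[i] = f * dw[i - 1] + np.sqrt(1.0 - f**2) * rng.normal()

C = tcf(dw, 300)
C = C / C[0]                                # normalized correlation function
print(C[0], C[78])                          # 1.0, and ~1/e (up to sampling noise)
                                            # since 78 * dt is close to tau_c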
10.3: Quantum Time-Correlation Functions
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.03%3A_Quantum_Time-Correlation_Functions | Quantum correlation functions involve the equilibrium (thermal) average over a product of Hermitian operators evaluated at two times. The thermal average is implicit in writing\[C _ {A A} ( \tau ) = \langle A ( \tau ) A ( 0 ) \rangle.\]Naturally, this also invokes a Heisenberg representation of the operators, although in almost all cases, we will be writing correlation functions as interaction picture operators\[A _ {I} (t) = e^{i H _ {0} t / \hbar} A e^{- i H _ {0} t / \hbar}.\]To emphasize the thermal average, the quantum correlation function can also be written as\[C _ {A A} ( \tau ) = \left\langle \frac {e^{- \beta H}} {Z} A ( \tau ) A ( 0 ) \right\rangle \label{9.18}\]with \(\beta = \left( k _ {\mathrm {B}} T \right)^{- 1}\). If we evaluate this for a time-independent Hamiltonian in a basis of states \(|n\rangle\), inserting a projection operator leads to our previous expression\[C _ {A A} ( \tau ) = \sum _ {n} p _ {n} \langle n | A ( \tau ) A ( 0 ) | n \rangle \label{9.19}\]with \(p _ {n} = e^{- \beta E _ {n}} / Z\). Given the case of a time-independent Hamiltonian for which we have knowledge of the eigenstates, we can also express the correlation function in the Schrödinger picture as\[\begin{align} C _ {A A} ( \tau ) &= \sum _ {n} p _ {n} \left\langle n \left| U^{\dagger} ( \tau ) A U ( \tau ) A \right| n \right\rangle \\[4pt] &= \sum _ {n , m} p _ {n} \langle n | A | m \rangle \langle m | A | n \rangle e^{- i \omega _ {m n} \tau} \\[4pt] &= \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} e^{- i \omega _ {m n} \tau} \label{9.20} \end{align}\]There are a few properties of quantum correlation functions for Hermitian operators that can be obtained using the properties of the time-evolution operator. First, we can show that correlation functions are stationary:\[\left.\begin{aligned} \left\langle A (t) A \left( t^{\prime} \right) \right\rangle & = \left\langle U^{\dagger} (t) A ( 0 ) U (t) U^{\dagger} \left( t^{\prime} \right) A ( 0 ) U \left( t^{\prime} \right) \right\rangle \\[4pt] & = \left\langle U \left( t^{\prime} \right) U^{\dagger} (t) A U (t) U^{\dagger} \left( t^{\prime} \right) A \right\rangle \\[4pt] & = \left\langle U^{\dagger} \left( t - t^{\prime} \right) A U \left( t - t^{\prime} \right) A \right\rangle \\[4pt] & = \left\langle A \left( t - t^{\prime} \right) A ( 0 ) \right\rangle \end{aligned} \right. \label{9.21}\]Similarly, we can show\[\langle A ( - t ) A ( 0 ) \rangle = \langle A (t) A ( 0 ) \rangle^{*} = \langle A ( 0 ) A (t) \rangle \label{9.22}\]or in short\[C _ {A A}^{*} (t) = C _ {A A} ( - t ) \label{9.23}\]Note that the quantum \(C_{AA}(t)\) is complex. You cannot directly measure a quantum correlation function, but observables are often related to the real or imaginary part of correlation functions.\[C _ {A A} (t) = C _ {A A}^{\prime} (t) + i C _ {A A}^{\prime \prime} (t) \label{9.24}\]The real and imaginary parts of \(C_{AA}(t)\) can be separated as\[\left.\begin{aligned} C _ {A A}^{\prime} (t) & = \frac {1} {2} \left[ C _ {A A} (t) + C _ {A A}^{*} (t) \right] = \frac {1} {2} [ \langle A (t) A ( 0 ) \rangle + \langle A ( 0 ) A (t) \rangle ] \\[4pt] & = \frac {1} {2} \left\langle [ A (t) , A ( 0 ) ] _ {+} \right\rangle \end{aligned} \right.
\label{9.25}\]\[\left.\begin{aligned} C _ {A A}^{\prime \prime} (t) & = \frac {1} {2} \left[ C _ {A A} (t) - C _ {A A}^{*} (t) \right] = \frac {1} {2} [ \langle A (t) A ( 0 ) \rangle - \langle A ( 0 ) A (t) \rangle ] \\[4pt] & = \frac {1} {2} \langle [ A (t) , A ( 0 ) ] \rangle \end{aligned} \right. \label{9.26}\]Above, \([ A , B ] _ {+} \equiv A B + B A\) is the anticommutator. The real part is even in time, and can be expanded as a Fourier series in cosines, whereas the imaginary part is odd, and can be expanded in sines. We will see later that the magnitude of the real part grows with temperature, but the imaginary part does not. At 0 K, the real and imaginary components have equal amplitudes, but as one approaches the high temperature or classical limit, the real part dominates the imaginary part.We will also see in our discussion of linear response that \(C'_{AA}\) and \(C''_{AA}\) are directly proportional to the step response function \(S\) and the impulse response function \(R\), respectively. \(R\) describes how a system is driven away from equilibrium by an external potential, whereas \(S\) describes the relaxation of the system to equilibrium when a force holding it away from equilibrium is released. Classically, the two are related by \(R \propto \partial S / \partial t\).Since time and frequency are conjugate variables, we can also define a spectral or frequency-domain correlation function by the Fourier transformation of the TCF. The Fourier transform and its inverse are defined as\[ \begin{align} \tilde {C} _ {A A} ( \omega ) &= \tilde {\mathcal {F}} \left[ C _ {A A} (t) \right] \\[4pt] &= \int _ {- \infty}^{+ \infty} \mathrm {e}^{i \omega t} C _ {A A} (t) \,d t \label{9.27} \end{align}\]\[\begin{align} C _ {A A} (t) &= \tilde {\mathcal {F}}^{- 1} \left[ \tilde {C} _ {A A} ( \omega ) \right] \\[4pt] &= \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} \mathrm {e}^{- i \omega t} \tilde {C} _ {A A} ( \omega ) \,d \omega \label{9.28} \end{align}\]For a time-independent Hamiltonian, as we might have in an interaction picture problem, the Fourier transform of the TCF in Equation \ref{9.20} gives\[\tilde {C} _ {A A} ( \omega ) = \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} \delta \left( \omega - \omega _ {m n} \right) \label{9.29}\]This expression looks very similar to the Golden Rule transition rate from first-order perturbation theory. In fact, the Fourier transform of time-correlation functions evaluated at the energy gap gives the transition rate between states that we obtain from first-order perturbation theory. Note that this expression is valid whether the initial states \(n\) are higher or lower in energy than final states \(m\), and accounts for upward and downward transitions.
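A toy three-level calculation makes eqs. (\ref{9.20}) and (\ref{9.23}) concrete (an added sketch with arbitrary assumed energies and matrix elements, \(\hbar = 1\)):

import numpy as np

hbar, beta = 1.0, 2.0
E = np.array([0.0, 0.6, 1.3])              # assumed eigenenergies of H_0
p = np.exp(-beta * E); p /= p.sum()        # Boltzmann populations
A = np.array([[0.2, 0.5, 0.1],
              [0.5, -0.1, 0.4],
              [0.1, 0.4, 0.3]])            # a Hermitian operator in the eigenbasis

def C(t):
    # eq. (9.20): sum_{n,m} p_n |A_mn|^2 exp(-i w_mn t), with w_mn = (E_m - E_n)/hbar
    w = (E[:, None] - E[None, :]) / hbar   # w[m, n]
    return np.sum(p[None, :] * np.abs(A) ** 2 * np.exp(-1j * w * t))

t = 0.7
print(np.allclose(C(-t), np.conj(C(t))))   # eq. (9.23): C(-t) = C*(t)
print(np.isclose(C(0).real, np.sum(p * np.diag(A @ A))))   # C(0) = <A^2>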
If we compare the ratio of upward and downward transition rates between two states \(i\) and \(j\), we have\[\frac {\tilde {C} _ {A A} \left( \omega _ {i j} \right)} {\tilde {C} _ {A A} \left( \omega _ {j i} \right)} = \frac {p _ {j}} {p _ {i}} = e^{- \beta \left( E _ {j} - E _ {i} \right)} = e^{\beta \hbar \omega _ {i j}} \label{9.30}\]This is one way of showing the principle of detailed balance, which relates upward and downward transition rates at equilibrium to the difference in thermal occupation between states:\[\tilde {C} _ {A A} ( \omega ) = e^{\beta \hbar \omega} \tilde {C} _ {A A} ( - \omega ) \label{9.31}\]This relationship together with a Fourier transform of Equation \ref{9.23} allows us to obtain the real and imaginary components using\[\tilde {C} _ {A A} ( \omega ) \pm \tilde {C} _ {A A} ( - \omega ) = \left( 1 \pm e^{- \beta \hbar \omega} \right) \tilde {C} _ {A A} ( \omega ) \label{9.32}\]\[\tilde {C} _ {A A}^{\prime} ( \omega ) = \frac{1}{2} \tilde {C} _ {A A} ( \omega ) \left( 1 + e^{- \beta \hbar \omega} \right) \label{9.33}\]\[\tilde {C} _ {A A}^{\prime \prime} ( \omega ) = \frac{1}{2} \tilde {C} _ {A A} ( \omega ) \left( 1 - e^{- \beta \hbar \omega} \right) \label{9.34}\]This page titled 10.3: Quantum Time-Correlation Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,271
10.4: Transition Rates from Correlation Functions
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/10%3A_Time-Correlation_Functions/10.04%3A_Transition_Rates_from_Correlation_Functions | We have already seen that the rates obtained from first-order perturbation theory are related to the Fourier transform of the time-dependent external potential evaluated at the energy gap between the initial and final state. Here we will show that the rate of leaving an initially prepared state, typically expressed by Fermi’s Golden Rule through a resonance condition in the frequency domain, can be expressed in the time-domain picture in terms of a time-correlation function for the interaction of the initial state with others. The state-to-state form of Fermi’s Golden Rule is\[w _ {k \ell} = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \label{9.35}\]We will look specifically at the case of a system at thermal equilibrium in which the initially populated states \(\ell\) are coupled to all states \(k\). Time-correlation functions are expressions that apply to systems at thermal equilibrium, so we will thermally average this expression:\[\overline {w} _ {k \ell} = \frac {2 \pi} {\hbar} \sum _ {k , \ell} p _ {\ell} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \label{9.36}\]where \(p _ {\ell} = e^{- \beta E _ {\ell}} / Z\) and \(Z\) is the partition function. The energy conservation statement expressed in terms of \(E\) or \(\omega\) can be converted to the time domain using the definition of the delta function\[\delta ( \omega ) = \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t e^{i \omega t} \label{9.37}\]giving\[\overline {w} _ {k \ell} = \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \left| V _ {k \ell} \right|^{2} \int _ {- \infty}^{+ \infty} d t\, e^{i \left( E _ {k} - E _ {\ell} \right) t / \hbar} \label{9.38}\]Writing the matrix elements explicitly and recognizing that in the interaction picture,\[e^{- i H _ {0} t / \hbar} | \ell \rangle = e^{- i E _ {\ell} t / \hbar} | \ell \rangle,\]we have\[ \begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t\, e^{i \left( E _ {k} - E _ {\ell} \right) t / \hbar} \langle \ell | V | k \rangle \langle k | V | \ell \rangle \label{9.39} \\[4pt] &= \frac {1} {\hbar^{2}} \sum _ {k , \ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t \langle \ell | V | k \rangle \left\langle k \left| e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar} \right| \ell \right\rangle \label{9.40} \end{align}\]Then, since \(\sum _ {k} | k \rangle \langle k | = 1\),\[ \begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \sum _ {\ell} p _ {\ell} \int _ {- \infty}^{+ \infty} d t \left\langle \ell \left| V _ {I} ( 0 ) V _ {I} (t) \right| \ell \right\rangle \label{9.41} \\[4pt] &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle \label{9.42} \end{align}\]In the second step, the thermal sum gives an equilibrium average, and the stationarity of equilibrium correlation functions lets us reorder the operators under the integral over all \(t\). As before,\[V _ {I} (t) = e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar}\]The final expression in Equation \ref{9.42} indicates that integrating over a correlation function for the time-dependent interaction of the initial state with its surroundings gives the relaxation or transfer rate. This is a general expression.
Although the derivation emphasized specific eigenstates, Equation \ref{9.42} shows that with a knowledge of a time-dependent interaction potential of any sort, we can calculate transition rates from the time-correlation function for that potential.The same approach can be taken using the rates of transition in an equilibrium system induced by a harmonic perturbation\[\overline {w} _ {k \ell} = \frac {\pi} {2 \hbar^{2}} \sum _ {\ell , k} p _ {\ell} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{9.43}\]resulting in a similar expression for the transition rate in terms of an interaction potential time-correlation function\[ \begin{align} \overline {w} _ {k \ell} &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t\, e^{- i \omega t} \left\langle V _ {I} ( 0 ) V _ {I} (t) \right\rangle \\[4pt] &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \,e^{i \omega t} \left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle \label{9.44} \end{align}\]We will look at this closer in the following section. Note that here the transfer rate is expressed in terms of a Fourier transform over a correlation function for the time-dependent interaction potential. Although Equation \ref{9.42} is not written as a Fourier transform, it can in practice be evaluated by Fourier transforming the correlation function and taking its value at zero frequency, as sketched below.This page titled 10.4: Transition Rates from Correlation Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,272
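In other words, the rate in eq. (\ref{9.42}) is simply the area under the interaction correlation function, i.e., its Fourier transform evaluated at zero frequency. A hedged sketch, using an assumed, purely illustrative exponential form for \(\langle V_I(t)V_I(0)\rangle\):

import numpy as np

hbar, V0, tau = 1.0, 0.1, 2.0
t = np.linspace(-60.0, 60.0, 12001)
Cvv = V0**2 * np.exp(-np.abs(t) / tau)      # assumed model correlation function

dt = t[1] - t[0]
rate = np.sum(Cvv) * dt / hbar**2           # eq. (9.42): area under C_VV(t)
print(rate, 2.0 * V0**2 * tau / hbar**2)    # numerical vs. analytic area, both ~0.04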
11.1: Classical Linear Response Theory
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/11%3A_Linear_Response_Theory/11.01%3A_Classical_Linear_Response_Theory | We will use linear response theory as a way of describing a real experimental observable. Specifically this will tell us how an equilibrium system changes in response to an applied potential. The quantity that will describe this is a response function, a real observable quantity. We will go on to show how it is related to correlation functions. Embedded in this discussion is a particularly important observation. We will now deal with a nonequilibrium system, but we will show that when the changes are small away from equilibrium, the equilibrium fluctuations dictate the nonequilibrium response! Thus knowledge of equilibrium dynamics is useful in predicting the outcome of nonequilibrium processes.So, the question is “How does the system respond if you drive it away from equilibrium?” We will examine the case where an equilibrium system, described by a Hamiltonian \(H_0\), interacts weakly with an external agent, \(V(t)\). The system is moved away from equilibrium by the external agent, and the system absorbs energy from the external agent. How do we describe the time-dependent properties of the system? We first take the external agent to interact with the system through an internal variable \(A\). So the Hamiltonian for this problem is given by\[H = H _ {0} - f (t) A \label{10.1}\]Here \(f(t)\) is the time-dependent action of the external agent, and the deviation from equilibrium is linear in the internal variable. We describe the behavior of an ensemble initially at thermal equilibrium by assuming that each member of the ensemble is subject to the same interaction with the external agent, and then ensemble averaging. Initially, the system is described by \(H_0\). It is at equilibrium and the internal variable is characterized by an equilibrium ensemble average \(\langle A \rangle\). The external agent is then applied at time \(t_0\), and the system is moved away from equilibrium, and is characterized through a nonequilibrium ensemble average, \(\overline {A}\). \(\langle A \rangle \neq \overline {A (t)}\) as a result of the interaction.For a weak interaction with the external agent, we can describe \(\overline {A (t)}\) by performing an expansion in powers of \(f(t)\)\[\begin{align} \overline {A (t)} &= \left( \text {terms} f^{( 0 )} \right) + \left( \text {terms} f^{( 1 )} \right) + \ldots \label{10.2} \\[4pt] &= \langle A \rangle + \int d t _ {0} R \left( t , t _ {0} \right) f \left( t _ {0} \right) + \ldots \label{10.3} \end{align}\]In this expression the agent is applied at \(t_0\), and we observe the system at \(t\). The leading term in this expansion is independent of \(f\), and is therefore equal to \(\langle A \rangle\). The next term in Equation \ref{10.3} describes the deviation from the equilibrium behavior in terms of a linear dependence on the external agent. \(R \left( t , t _ {0} \right)\) is the linear response function, the quantity that contains the microscopic information on the system and how it responds to the applied agent. The integration in the last term of Equation \ref{10.3} indicates that the nonequilibrium behavior depends on the full history of the application of the agent \(f \left( t _ {0} \right)\) and the response of the system to it. We are seeking a quantum mechanical description of \(R\). The response function has the following properties:
1. Causal: Causality refers to the common sense observation that the system cannot respond before the force has been applied. Therefore \(R \left( t , t _ {0} \right) = 0\) for \(t < t_0\), and the time-dependent change in \(A\) is\[\overline {\delta A (t)} = \overline {A (t)} - \langle A \rangle = \int _ {- \infty}^{t} d t _ {0} R \left( t , t _ {0} \right) f \left( t _ {0} \right) \label{10.4}\]The lower integration limit is set to \(- \infty\) to reflect that the system is initially at equilibrium, and the upper limit is the time of observation. We can also make the statement of causality explicit by writing the linear response function with a step function: \(\Theta \left( t - t _ {0} \right) R \left( t , t _ {0} \right)\), where\[\Theta \left( t - t _ {0} \right) \equiv \left\{\begin{array} {l l} {0} & {\left( t < t _ {0} \right)} \\ {1} & {\left( t \geq t _ {0} \right)} \end{array} \right. \label{10.5}\]2. Stationary: Similar to our discussion of correlation functions, the time-dependence of the system only depends on the time interval between application of the potential and observation. Therefore we write\[R \left( t , t _ {0} \right) = R \left( t - t _ {0} \right)\]and\[\delta \overline {A (t)} = \int _ {- \infty}^{t} d t _ {0} R \left( t - t _ {0} \right) f \left( t _ {0} \right) \label{10.6}\]This expression says that the observed response of the system to the agent is a convolution of the material response with the time-development of the applied force. Rather than the absolute time points, we can define a time interval \(\tau = t - t _ {0}\), so that we can write\[\delta \overline {A (t)} = \int _ {0}^{\infty} d \tau R ( \tau ) f ( t - \tau ) \label{10.7}\]3. Impulse response: Note that for a delta function perturbation:\[f (t) = \lambda \delta \left( t - t _ {0} \right) \label{10.8}\]we obtain\[\overline {\delta A (t)} = \lambda R \left( t - t _ {0} \right) \label{10.9}\]Thus, \(R\) describes how the system behaves when an abrupt perturbation is applied and is often referred to as the impulse response function. An impulse response kicks the system away from the equilibrium established under \(H_0\), and therefore the shape of a response function will always rise from zero and ultimately return to zero. In other words, it will be a function that can be expanded in sines. Thus the response to an arbitrary \(f(t)\) can be described through a Fourier analysis, suggesting that a spectral representation of the response function would be useful.The observed temporal behavior of the nonequilibrium system can also be cast in the frequency domain as a spectral response function, or susceptibility. We start with Equation \ref{10.7} and Fourier transform both sides:\[\left.\begin{aligned} \overline {\delta A ( \omega )} & \equiv \int _ {- \infty}^{+ \infty} d t \delta \overline {A (t)} e^{i \omega t} \\ & = \int _ {- \infty}^{+ \infty} d t \left[ \int _ {0}^{\infty} d \tau R ( \tau ) f ( t - \tau ) \right] e^{i \omega t} \end{aligned} \right. \label{10.10}\]
Now we insert \(e^{- i \omega \tau} e^{+ i \omega \tau} = 1\) and collect terms to give\[ \begin{align} \delta \overline {A ( \omega )} &= \int _ {- \infty}^{+ \infty} d t \int _ {0}^{\infty} d \tau R ( \tau ) f ( t - \tau ) e^{i \omega ( t - \tau )} e^{i \omega \tau} \label{10.11} \\[4pt] &= \int _ {- \infty}^{+ \infty} d t^{\prime} e^{i \omega t^{\prime}} f \left( t^{\prime} \right) \int _ {0}^{\infty} d \tau R ( \tau ) e^{i \omega \tau} \label{10.12} \end{align}\]or\[\delta \overline {A ( \omega )} = \tilde {f} ( \omega ) \chi ( \omega ) \label{10.13}\]In Equation \ref{10.12} we switched variables, setting \(t^{\prime} = t - \tau\). The first term \(\tilde {f} ( \omega )\) is a complex frequency domain representation of the driving force, obtained from the Fourier transform of \(f \left( t^{\prime} \right)\). The second term \(\chi ( \omega )\) is the susceptibility, which is defined as the Fourier–Laplace transform (i.e., single-sided Fourier transform) of the impulse response function. It is a frequency domain representation of the linear response function. Switching between time and frequency domains shows that a convolution of the force and response in time leads to the product of the force and response in frequency. This is a manifestation of the convolution theorem:\[A (t) \otimes B (t) \equiv \int _ {- \infty}^{\infty} d \tau A ( t - \tau ) B ( \tau ) = \int _ {- \infty}^{\infty} d \tau A ( \tau ) B ( t - \tau ) = \mathcal {F}^{- 1} [ \tilde {A} ( \omega ) \tilde {B} ( \omega ) ] \label{10.14}\]Here \(\otimes\) refers to convolution, \(\tilde {A} ( \omega ) = \mathcal {F} [ A (t) ]\), \(\mathcal {F}\) is a Fourier transform, and \(\mathcal {F}^{- 1} [ \cdots ]\) is an inverse Fourier transform.Note that \(R(\tau)\) is a real function, since the response of a system is an observable. The susceptibility \(\chi ( \omega )\) is complex,\[\chi ( \omega ) = \chi^{\prime} ( \omega ) + i \chi^{\prime \prime} ( \omega ) \label{10.15}\]since\[\chi ( \omega ) = \int _ {0}^{\infty} d \tau R ( \tau ) e^{i \omega \tau} \label{10.16}\]The real and imaginary contributions, however, are not independent. We have\[\chi^{\prime} = \int _ {0}^{\infty} d \tau R ( \tau ) \cos \omega \tau \label{10.17}\]and\[\chi^{\prime \prime} = \int _ {0}^{\infty} d \tau R ( \tau ) \sin \omega \tau \label{10.18}\]\(\chi^{\prime}\) and \(\chi^{\prime \prime}\) are even and odd functions of frequency:\[\chi^{\prime} ( \omega ) = \chi^{\prime} ( - \omega ) \label{10.19}\]\[\chi^{\prime \prime} ( \omega ) = - \chi^{\prime \prime} ( - \omega ) \label{10.20}\]
The two are related by the Kramers–Kronig relationships:\[\begin{align} \chi^{\prime} ( \omega ) &= \frac {1} {\pi} P \int _ {- \infty}^{+ \infty} \frac {\chi^{\prime \prime} \left( \omega^{\prime} \right)} {\omega^{\prime} - \omega} d \omega^{\prime} \label{10.24} \\[4pt] \chi^{\prime \prime} ( \omega ) &= - \frac {1} {\pi} P \int _ {- \infty}^{+ \infty} \frac {\chi^{\prime} \left( \omega^{\prime} \right)} {\omega^{\prime} - \omega} d \omega^{\prime} \label{10.25} \end{align}\]These are obtained by substituting the inverse sine transform of Equation \ref{10.18} into Equation \ref{10.17}:\[\begin{align} \chi^{\prime} ( \omega ) &= \frac {1} {\pi} \int _ {0}^{\infty} d t \cos \omega t \int _ {- \infty}^{+ \infty} \chi^{\prime \prime} \left( \omega^{\prime} \right) \sin \omega^{\prime} t d \omega^{\prime} \\[4pt] &= \frac {1} {\pi} \lim _ {L \rightarrow \infty} \int _ {- \infty}^{+ \infty} d \omega^{\prime} \chi^{\prime \prime} \left( \omega^{\prime} \right) \int _ {0}^{L} \cos \omega t \sin \omega^{\prime} t \,d t \end{align}\]Using \(\cos a x \sin b x=\frac{1}{2} \sin (a+b) x+\frac{1}{2} \sin (b-a) x\) this can be written as\[\chi^{\prime} ( \omega ) = \frac {1} {\pi} \lim _ {L \rightarrow \infty} \mathrm {P} \int _ {- \infty}^{+ \infty} d \omega^{\prime} \chi^{\prime \prime} \left( \omega^{\prime} \right) \frac {1} {2} \left[ \frac {1 - \cos \left( \omega^{\prime} + \omega \right) L} {\omega^{\prime} + \omega} + \frac {1 - \cos \left( \omega^{\prime} - \omega \right) L} {\omega^{\prime} - \omega} \right] \label{10.27}\]If we choose to evaluate the limit \(L \rightarrow \infty\), the cosine terms are hard to deal with, but we expect they will vanish since they oscillate rapidly. This is equivalent to averaging over a monochromatic field. Alternatively, we can average over a single cycle, \(L=2 \pi /\left(\omega^{\prime}-\omega\right)\), to obtain eq. (\ref{10.24}). The other relation can be derived in a similar way. Note that the Kramers–Kronig relationships are a consequence of causality, which dictates the lower limit of \(t = 0\) on the first integral evaluated above.Example \(\PageIndex{1}\): Driven Harmonic OscillatorOne can classically model the absorption of light through a resonant interaction of the electromagnetic field with an oscillating dipole, using Newton’s equations for a forced damped harmonic oscillator:\[\ddot {x} + \gamma \dot {x} + \omega _ {0}^{2} x = F (t) / m \label{10.28}\]Here \(x\) is the coordinate being driven, \(\gamma\) is the damping constant, and \(\omega_{0}=\sqrt{k / m}\) is the natural frequency of the oscillator. One way to solve this problem is to take the driving force to have the form of a monochromatic oscillating source\[F (t) = F _ {0} \cos \omega t \label{10.29}\]Then, Equation \ref{10.28} has the solution\[x (t) = \frac {F _ {0}} {m} \left( \left( \omega^{2} - \omega _ {0}^{2} \right)^{2} + \gamma^{2} \omega^{2} \right)^{- 1 / 2} \sin ( \omega t + \delta ) \label{10.30}\]with\[\tan \delta = \frac{\omega _ {0}^{2} - \omega^{2}}{\gamma \omega} \label{10.31}\]This shows that the driven oscillator has an oscillation period that is dictated by the driving frequency \(\omega\), and whose amplitude and phase shift relative to the driving field is dictated by its detuning from resonance.
The average power absorbed by the oscillator from the driving field is\[\begin{align*} P _ {a v g} ( \omega ) &= \langle F (t) \cdot \dot {x} (t) \rangle \label{10.32} \\[4pt] &= \frac {\gamma \omega^{2} F _ {0}^{2}} {2 m} \left[ \left( \omega _ {0}^{2} - \omega^{2} \right)^{2} + \gamma^{2} \omega^{2} \right]^{- 1} \end{align*}\]To determine the response function for the damped harmonic oscillator, we seek a solution to Equation \ref{10.28} using an impulsive driving force\[F (t) = F _ {0} \delta \left( t - t _ {0} \right) \nonumber\]The linear response of this oscillator to an arbitrary force is\[x (t) = \int _ {0}^{\infty} d \tau R ( \tau ) F ( t - \tau ) \label{10.33}\]so that the time-dependence with an impulsive driving force is directly proportional to the response function, \(x(t)=F_{0} R(t - t_0)\). For this case, we obtain\[R ( \tau ) = \frac {1} {m \Omega} \exp \left( - \frac {\gamma} {2} \tau \right) \sin \Omega \tau \label{10.34}\]The reduced frequency is defined as\[\Omega = \sqrt {\omega _ {0}^{2} - \gamma^{2} / 4} \label{10.35}\]From this, we evaluate eq. (\ref{10.16}) and obtain the susceptibility\[\chi ( \omega ) = \frac {1} {m \left( \omega _ {0}^{2} - \omega^{2} - i \gamma \omega \right)} \label{10.36}\]As we will see shortly, the absorption of light by the oscillator is proportional to the imaginary part of the susceptibility\[\chi^{\prime \prime} ( \omega ) = \frac {\gamma \omega} {m \left[ \left( \omega _ {0}^{2} - \omega^{2} \right)^{2} + \gamma^{2} \omega^{2} \right]} \label{10.37}\]The real part is\[\chi^{\prime} ( \omega ) = \frac {\omega _ {0}^{2} - \omega^{2}} {m \left[ \left( \omega _ {0}^{2} - \omega^{2} \right)^{2} + \gamma^{2} \omega^{2} \right]} \label{10.38}\]For the case of weak damping \(\gamma \ll \omega _ {0}\) commonly encountered in molecular spectroscopy, Equation \ref{10.36} is written as a Lorentzian lineshape by using the near-resonance approximation\[\omega^{2} - \omega _ {0}^{2} = \left( \omega + \omega _ {0} \right) \left( \omega - \omega _ {0} \right) \approx 2 \omega _ {0} \left( \omega - \omega _ {0} \right) \label{10.39}\]\[\chi ( \omega ) \approx \frac {1} {2 m \omega _ {0}} \frac {1} {\omega _ {0} - \omega - i \gamma / 2} \label{10.40}\]
Taking the imaginary part of Equation \ref{10.40}, the susceptibility has a symmetric Lorentzian lineshape, with a full width at half maximum of \(\gamma\):
\[\chi^{\prime \prime} ( \omega ) \approx \frac {1} {2 m \omega _ {0}} \frac {\gamma / 2} {\left( \omega - \omega _ {0} \right)^{2} + \gamma^{2} / 4} \label{10.41}\]\[\chi^{\prime} ( \omega ) \approx \frac {1} {2 m \omega _ {0}} \frac {\left( \omega _ {0} - \omega \right)} {\left( \omega - \omega _ {0} \right)^{2} + \gamma^{2} / 4} \label{10.42}\]If the response of the system is not linearly proportional to the applied potential, but the interaction is still perturbative, we can include nonlinear terms, i.e., higher expansion orders of \(\overline {A (t)}\) in Equation \ref{10.3}. Let’s look at second order:\[\delta \overline {A (t)}^{( 2 )} = \int d t _ {1} \int d t _ {2} R^{( 2 )} \left( t ; t _ {1} , t _ {2} \right) f _ {1} \left( t _ {1} \right) f _ {2} \left( t _ {2} \right) \label{10.43}\]Again we are integrating over the entire history of the application of two forces \(f_1\) and \(f_2\), including any quadratic dependence on \(f\). In this case, we will enforce causality through a time ordering that requires\[R^{( 2 )} \left( t ; t _ {1} , t _ {2} \right) \Rightarrow R^{( 2 )} \cdot \Theta \left( t - t _ {2} \right) \cdot \Theta \left( t _ {2} - t _ {1} \right) \label{10.44}\]which leads to\[\delta \overline {A (t)}^{( 2 )} = \int _ {- \infty}^{t} d t _ {2} \int _ {- \infty}^{t _ {2}} d t _ {1} R^{( 2 )} \left( t ; t _ {1} , t _ {2} \right) f _ {1} \left( t _ {1} \right) f _ {2} \left( t _ {2} \right) \label{10.45}\]Now we will take the system to be stationary, so that we are only concerned with the time intervals between consecutive interaction times. If we define the intervals between adjacent interactions\[ \left. \begin{array} {l} {\tau _ {1} = t _ {2} - t _ {1}} \\ {\tau _ {2} = t - t _ {2}} \end{array} \right. \label{10.46} \]then we have\[\delta \overline {A (t)}^{( 2 )} = \int _ {0}^{\infty} d \tau _ {1} \int _ {0}^{\infty} d \tau _ {2} R^{( 2 )} \left( \tau _ {1} , \tau _ {2} \right) f _ {1} \left( t - \tau _ {1} - \tau _ {2} \right) f _ {2} \left( t - \tau _ {2} \right) \label{10.47}\]This page titled 11.1: Classical Linear Response Theory is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
11.2: Quantum Linear Response Functions
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/11%3A_Linear_Response_Theory/11.02%3A_Quantum_Linear_Response_Functions | To develop a quantum description of the linear response function, we start by recognizing that the response of a system to an applied external agent is a problem we can solve in the interaction picture. Our time-dependent Hamiltonian is\[ \begin{align} H (t) &= H _ {0} - f (t) \hat {A} \\[4pt] &= H _ {0} + V (t) \label{10.48} \end{align}\]\(H_o\) is the material Hamiltonian for the equilibrium system. The external agent acts on the equilibrium system through \(\hat{A}\), an operator in the system states, with a time-dependence \(f(t)\). We take \(V(t)\) to be a small change, and treat this problem with perturbation theory in the interaction picture. We want to describe the nonequilibrium response \(\overline {A(t)}\), which we will get by ensemble averaging the expectation value of \( \hat{A}\), i.e. \(\overline {\langle A (t) \rangle}\). Remember the expectation value for a pure state in the interaction picture is\[ \begin{align} \langle A (t) \rangle & = \left\langle \psi _ {I} (t) \left| A _ {I} (t) \right| \psi _ {I} (t) \right\rangle \\[4pt] & = \left\langle \psi _ {0} \left| U _ {I}^{\dagger} A _ {I} U _ {I} \right| \psi _ {0} \right\rangle \label{10.49} \end{align} \]The interaction picture Hamiltonian for Equation \ref{10.48} is \[\left.\begin{aligned} V _ {I} (t) & = U _ {0}^{\dagger} (t) V (t) U _ {0} (t) \\[4pt] & = - f (t) A _ {I} (t) \end{aligned} \right. \label{10.50}\]To calculate an ensemble average of the state of the system after applying the external potential, we recognize that the nonequilibrium state of the system described by \(| \psi _ {I} (t) \rangle\) is in fact related to the initial equilibrium state of the system \(| \psi _ o\rangle\) through a time-propagator, as seen in Equation \ref{10.49}. So the nonequilibrium expectation value \(\overline {A (t)}\) is in fact obtained by an equilibrium average over the expectation value of \(U _ {I}^{\dagger} A _ {I} U _ {I}\):\[ \overline {A (t)} = \sum _ {n} p _ {n} \left\langle n \left| U _ {I}^{\dagger} A _ {I} U _ {I} \right| n \right\rangle \label{10.51}\]Again \(| n \rangle \) are eigenstates of \(H_o\). Working with the first order solution to \(U_I(t)\)\[ U _ {I} \left( t - t _ {0} \right) = 1 + \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} f \left( t^{\prime} \right) A _ {I} \left( t^{\prime} \right)\label{10.52}\]we can now calculate the value of the operator \(\hat{A}\) at time \(t\), integrating over the history of the applied interaction \(f(t')\):\[\left.\begin{aligned} A (t) & = U _ {I}^{\dagger} A _ {I} U _ {I} \\ & = \left\{1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} f \left( t^{\prime} \right) A _ {I} \left( t^{\prime} \right) \right\} A _ {I} (t) \left\{1 + \frac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} f \left( t^{\prime} \right) A _ {I} \left( t^{\prime} \right) \right\} \end{aligned} \right. \label {10.53} \]Here note that \(f\) is the time-dependence of the external agent. It does not involve operators in \(H_o\) and commutes with \(A\). 
Working toward the linear response function, we just retain the terms linear in \(f\):\[ \left.\begin{aligned} A (t) & \cong A _ {I} (t) + \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime}\, f \left( t^{\prime} \right) \left\{A _ {I} (t) A _ {I} \left( t^{\prime} \right) - A _ {I} \left( t^{\prime} \right) A _ {I} (t) \right\} \\[4pt] & = A _ {I} (t) + \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} \,f \left( t^{\prime} \right) \left[ A _ {I} (t) , A _ {I} \left( t^{\prime} \right) \right] \end{aligned} \right. \label{10.54}\]Since our system is initially at equilibrium, we set \(t _ {0} = - \infty\), switch variables to the time interval \(\tau = t - t^{\prime} \), and use\[A _ {I} (t) = U _ {0}^{\dagger} (t) A U _ {0} (t)\]to obtain\[ A (t) = A _ {I} (t) + \dfrac {i} {\hbar} \int _ {0}^{\infty} d \tau \,f ( t - \tau ) \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \label{10.55}\]We can now calculate the expectation value of \(A\) by performing the ensemble-average described in Equation \ref{10.51}. Noting that the force is applied equally to each member of ensemble, we have\[\overline {A (t)} = \langle A \rangle + \dfrac {i} {\hbar} \int _ {0}^{\infty} d \tau f ( t - \tau ) \left\langle \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \right\rangle \label{10.56}\]The first term is independent of \(f\), and so it comes from an equilibrium ensemble average for the value of \(A\).\[ \langle A (t) \rangle = \sum _ {n} p _ {n} \left\langle n \left| A _ {I} \right| n \right\rangle = \langle A \rangle \label{10.57}\]The second term is just an equilibrium ensemble average over the commutator in \(A_I(t)\):\[ \left\langle \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \right\rangle = \sum _ {n} p _ {n} \left\langle n \left| \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \right| n \right\rangle \label{10.58}\]Comparing Equation \ref{10.56} with the expression for the linear response function, we find that the quantum linear response function is\[ \left. \begin{array} {r l} {R ( \tau )} & {= \dfrac {i} {\hbar} \left\langle \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \right\rangle} & {\tau \geq 0} \\[4pt] {} & {= 0} & {\tau < 0} \end{array} \right. \label{10.59}\]or as it is sometimes written with the unit step function in order to enforce causality: \[ R ( \tau ) = \dfrac {i} {\hbar} \Theta ( \tau ) \left\langle \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] \right\rangle \label{10.60}\]The important thing to note is that the time-development of the system with the applied external potential is governed by the dynamics of the equilibrium system. All of the time-dependence in the response function is under \(H_o\).The linear response function is therefore the difference of two correlation functions with the order of the operators interchanged, which is proportional to the imaginary part of the correlation function \(C''(\tau)\)\[ \left.\begin{aligned} R ( \tau ) & = \dfrac {i} {\hbar} \Theta ( \tau ) \left\{\left\langle A _ {I} ( \tau ) A _ {I} ( 0 ) \right\rangle - \left\langle A _ {I} ( 0 ) A _ {I} ( \tau ) \right\rangle \right\} \\[4pt] & = \dfrac {i} {\hbar} \Theta ( \tau ) \left( C _ {A A} ( \tau ) - C _ {A A}^{*} ( \tau ) \right) \\[4pt] & = - \dfrac {2} {\hbar} \Theta ( \tau ) C^{\prime \prime} ( \tau ) \end{aligned} \right.\label{10.61}\]As we expect for an observable, the response function is real. 
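As a concrete check of Equation \ref{10.60}, here is a short numerical sketch (mine, with \(\hbar = m = 1\) and an arbitrary temperature) for the coordinate \(q\) of a harmonic oscillator. For this case the commutator is a c-number, so one expects the temperature-independent classical form \(R(\tau) = \sin(\omega_0\tau)/(m\omega_0)\).

```python
# Sketch: R(tau) = (i/hbar) <[q(tau), q(0)]> for a thermal harmonic oscillator
# in a truncated number basis; expected result: sin(w0*tau)/(m*w0).
import numpy as np

N, w0, beta = 40, 1.0, 2.0          # basis size, frequency, inverse temperature
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)      # lowering operator
q = np.sqrt(1.0 / (2.0 * w0)) * (a + a.T)   # q with hbar = m = 1
E = w0 * (n + 0.5)
pop = np.exp(-beta * E)
pop /= pop.sum()                    # thermal populations

def R(tau):
    U = np.diag(np.exp(-1j * E * tau))      # propagator e^{-iH tau}
    qt = U.conj().T @ q @ U                 # Heisenberg-evolved q(tau)
    comm = qt @ q - q @ qt
    return float(np.real(1j * np.sum(pop * np.diag(comm))))

for tau in (0.3, 1.0, 2.5):
    print(R(tau), np.sin(w0 * tau) / w0)    # agree; independent of beta
```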
If we express the correlation function in the eigenstate description:\[ C (t) = \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} e^{- i \omega _ {m n} t} \label{10.62}\]then\[ R (t) = \dfrac {2} {\hbar} \Theta (t) \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} \sin \omega _ {m n} t \label{10.63}\]\(R(t)\) can always be expanded in sines, and is therefore an odd function of time. This reflects the fact that the impulse response (the deviation from equilibrium) must be zero at \(t = t_0\), the point where the external potential is applied, and grow away from zero thereafter.This page titled 11.2: Quantum Linear Response Functions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
11.3: The Response Function and Energy Absorption
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/11%3A_Linear_Response_Theory/11.03%3A_The_Response_Function_and_Energy_Absorption | Let’s investigate the relationship between the linear response function and the absorption of energy from the external agent, in this case an electromagnetic field. We will relate this to the absorption coefficient \(\alpha = \dot {E} / I\) which we have described previously. For this case,\[ H = H _ {0} - f (t) A = H _ {0} - \mu \cdot E (t) \label{10.64}\]This expression gives the energy of the system, so the rate of energy absorption averaged over the nonequilibrium ensemble is described by:\[ \dot {E} = \dfrac {\partial \overline {H}} {\partial t} = - \dfrac {\partial f} {\partial t} \overline {A (t)} \label{10.65} \]We will want to cycle-average this over the oscillating field, so the time-averaged rate of energy absorption is \[ \begin{align} \dot {E} & = \dfrac {1} {T} \int _ {0}^{T} d t \left[ - \dfrac {\partial f} {\partial t} \overline {A (t)} \right] \\[4pt] & = - \dfrac {1} {T} \int _ {0}^{T} d t \dfrac {\partial f (t)} {\partial t} \left[ \langle A \rangle + \int _ {0}^{\infty} d \tau R ( \tau ) f ( t - \tau ) \right] \label{10.66} \end{align}\]Here the response function is\[R ( \tau ) = i \langle [ \mu ( \tau ) , \mu ( 0 ) ] \rangle / \hbar.\]For a monochromatic electromagnetic field, we can write (and expand)\[ \begin{align} f (t) &= E _ {0} \cos \omega t \\[4pt] &= \dfrac {1} {2} \left[ E _ {0} e^{- i \omega t} + E _ {0}^{*} e^{i \omega t} \right] \label{10.67} \end{align}\]which leads to the following for the second term in Equation \ref{10.66}:\[ \dfrac {1} {2} \int _ {0}^{\infty} d \tau R ( \tau ) \left[ E _ {0} e^{- i \omega ( t - \tau )} + E _ {0}^{*} e^{i \omega ( t - \tau )} \right] = \dfrac {1} {2} \left[ E _ {0} e^{- i \omega t} \chi ( \omega ) + E _ {0}^{*} e^{i \omega t} \chi ( - \omega ) \right] \label{10.68}\]By differentiating Equation \ref{10.67}, and using it with Equation \ref{10.68} in Equation \ref{10.66}, we have\[ \dot {E} = - \dfrac {1} {T} \langle A \rangle [ f ( T ) - f ( 0 ) ] - \dfrac {1} {4 T} \int _ {0}^{T} d t \left[ - i \omega E _ {0} e^{- i \omega t} + i \omega E _ {0}^{*} e^{i \omega t} \right] \left[ E _ {0} e^{- i \omega t} \chi ( \omega ) + E _ {0}^{*} e^{i \omega t} \chi ( - \omega ) \right] \label{10.69}\]We will now cycle-average this expression, setting \(T = 2 \pi / \omega\). The first term vanishes, and in the second integral only the cross terms survive, because\[\dfrac {1} {T} \int _ {0}^{T} d t \,e^{- i \omega t} e^{+ i \omega t} = 1\]while\[\dfrac {1} {T} \int _ {0}^{T} d t \,e^{- i \omega t} e^{- i \omega t} = 0.\]The rate of energy absorption from the field is \[ \left.\begin{aligned} \dot {E} & = \dfrac {i} {4} \omega \left| E _ {0} \right|^{2} [ \chi ( - \omega ) - \chi ( \omega ) ] \\[4pt] & = \dfrac {\omega} {2} \left| E _ {0} \right|^{2} \chi^{\prime \prime} ( \omega ) \end{aligned} \right. \label{10.70}\]So, the absorption of energy by the system is related to the imaginary part of the susceptibility. 
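A numerical sketch (mine) can confirm Equation \ref{10.70} using the damped oscillator of the earlier example, with unit charge assumed so that \(f(t) = E(t)\); the cycle-averaged power extracted from a simulated trajectory is compared with \((\omega/2)|E_0|^2\chi^{\prime\prime}(\omega)\). All parameter values are arbitrary.

```python
# Sketch: check E_dot = (w/2)|E0|^2 chi''(w) (Equation 10.70) by driving a
# damped oscillator and cycle-averaging the numerical power <F(t) x_dot(t)>.
import numpy as np
from scipy.integrate import solve_ivp

m, gamma, w0, E0, w = 1.0, 0.3, 1.0, 0.1, 0.8

def eom(t, y):
    x, v = y
    return [v, -gamma * v - w0**2 * x + (E0 / m) * np.cos(w * t)]

sol = solve_ivp(eom, (0.0, 300.0), [0.0, 0.0], max_step=0.005,
                dense_output=True)
T = 2 * np.pi / w
t = np.linspace(300.0 - 10 * T, 300.0, 20001)   # ten late drive cycles
x, v = sol.sol(t)
P_num = np.trapz(E0 * np.cos(w * t) * v, t) / (10 * T)

chi2 = gamma * w / (m * ((w0**2 - w**2)**2 + gamma**2 * w**2))
print(P_num, 0.5 * w * E0**2 * chi2)            # the two closely agree
```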
Now, from the intensity of the incident field,\[I = \dfrac{c \left| E _ {0} \right|^{2}}{8 \pi}\]the absorption coefficient is\[ \alpha ( \omega ) = \dfrac {\dot {E}} {I} = \dfrac {4 \pi \omega} {c} \chi^{\prime \prime} ( \omega ) \label{10.71} \]This page titled 11.3: The Response Function and Energy Absorption is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
11.4: Relaxation of a Prepared State
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/11%3A_Linear_Response_Theory/11.04%3A_Relaxation_of_a_Prepared_State | The impulse response function \(R(t)\) describes the behavior of a system initially at equilibrium that is driven by an external field. Alternatively, we may need to describe the relaxation of a prepared state, in which we follow the return to equilibrium of a system initially held in a nonequilibrium state. This behavior is described by the step response function \(S(t)\). The step response comes from holding the system with a constant field \(H = H _ {0} - f A\) until a time \(t_0\) when the system is released, and it relaxes to the equilibrium state governed by \(H=H_o\). We can anticipate that the forms of these two functions are related. Just as we expect the impulse response to rise from zero and to be an odd function in time, the step response should decay from a fixed value and look even in time. In fact, we might expect to describe the impulse response by differentiating the step response, as seen in the classical case: \[R (t) = - \frac {1} {k T} \frac {d} {d t} S (t) \label{10.72}\]An empirical derivation of the step response begins with a few observations. First, response functions must be real, since they are proportional to observables; however, quantum correlation functions are complex and follow\[C ( - t ) = C^{*} (t).\]Classical correlation functions are real and even,\[C (t) = C ( - t )\]and have the properties of a step response. To obtain the relaxation of a real observable that is even in time, we can construct a symmetrized function, which is just the real part of the correlation function: \[\begin{align} S _ {A A} (t) & = \frac {1} {2} \left\{\left\langle A _ {I} (t) A _ {I} ( 0 ) \right\rangle + \left\langle A _ {I} ( 0 ) A _ {I} (t) \right\rangle \right\} \\ & = \frac {1} {2} \left\{C _ {A A} (t) + C _ {A A} ( - t ) \right\} \\ & = C _ {A A}^{\prime} (t) \end{align} \label{10.74}\]The step response function \(S\) is defined as follows for \(t \ge 0\), in terms of the anticommutator:\[S ( \tau ) \equiv \frac {1} {2} \Theta ( \tau ) \left\langle \left[ A _ {I} ( \tau ) , A _ {I} ( 0 ) \right] _ {+} \right\rangle\]From the eigenstate representation of the correlation function,\[C (t) = \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} e^{- i \omega _ {m n} t} \label{10.75}\]we see that the step response function can be expressed as an expansion in cosines \[S (t) = \Theta (t) \sum _ {n , m} p _ {n} \left| A _ {m n} \right|^{2} \cos \omega _ {m n} t \label{10.76}\]Further, for each frequency component \(\omega_{mn}\) of the correlation function, the real and imaginary parts are related by \[ \begin{align} \dfrac {d C^{\prime}} {d t} &= \omega _ {m n} C^{\prime \prime} \\[4pt] \dfrac {d C^{\prime \prime}} {d t} &= - \omega _ {m n} C^{\prime} \end{align} \label{10.77}\]which shows how the impulse response is related to the time-derivative of the step response. 
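As a quick consistency check on Equation \ref{10.72} (a sketch of mine, assuming the equipartition form of the classical oscillator correlation function), sympy confirms that differentiating the step response reproduces the undamped limit of the impulse response (Equation \ref{10.34} with \(\gamma \rightarrow 0\)):

```python
# Sketch: check R(t) = -(1/kT) dS/dt for an undamped classical oscillator,
# where S(t) = <q(0)q(t)> = (kT / m w0^2) cos(w0 t) by equipartition.
import sympy as sp

t, w0, m, kT = sp.symbols('t omega_0 m kT', positive=True)
S = kT / (m * w0**2) * sp.cos(w0 * t)          # classical step response
R = -sp.diff(S, t) / kT                        # Equation (10.72)
print(sp.simplify(R - sp.sin(w0 * t) / (m * w0)))   # prints 0
```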
In the frequency domain, the spectral representation of the step response is obtained from the Fourier–Laplace transform \[S _ {A A} ( \omega ) = \int _ {0}^{\infty} d t S _ {A A} (t) e^{i \omega t} \label{10.78}\]\[\begin{align} S _ {A A} ( \omega ) & = \frac {1} {2} \left[ C _ {A A} ( \omega ) + C _ {A A} ( - \omega ) \right] \\ & = \frac {1} {2} \left( 1 + e^{- \beta \hbar \omega} \right) C _ {A A} ( \omega ) \label{10.79} \end{align}\]Now, with the expression for the imaginary part of the susceptibility, \[\chi^{\prime \prime} ( \omega ) = \frac {1} {2 \hbar} \left( 1 - e^{- \beta \hbar \omega} \right) C _ {AA} ( \omega ) \label{10.80} \]we obtain the relationship\[\chi^{\prime \prime} ( \omega ) = \frac {1} {\hbar} \tanh \left( \frac {\beta \hbar \omega} {2} \right) S _ {A A} ( \omega ) \label{10.81}\]Equation \ref{10.81} is the formal expression for the fluctuation-dissipation theorem, proven in 1951 by Callen and Welton. It followed an observation made many years earlier by Lars Onsager, for which he was awarded the 1968 Nobel Prize in Chemistry: “The relaxation of macroscopic nonequilibrium disturbance is governed by the same laws as the regression of spontaneous microscopic fluctuations in an equilibrium state.” Noting that\[\tanh (x) = \dfrac{e^{x} - e^{- x}}{e^{x} + e^{- x}}\]and\[\tanh (x) \rightarrow x\]for \(x \ll 1\), we see that in the high temperature (classical) limit \[\chi^{\prime \prime} ( \omega ) \Rightarrow \frac {1} {2 k T} \omega S _ {A A} ( \omega ) \label{10.82}\]We can show more directly how the impulse and step response are related. To begin, let’s consider the step response experiment, \[H = \left\{\begin{array} {l l} {H _ {0} - f A} & {t < 0} \\ {H _ {0}} & {t \geq 0} \end{array} \right. \label{10.83}\]and write the expectation values of the internal variable A for the system equilibrated under \(H\) at time \(t = 0\) and \(t = ∞\). \[\langle A \rangle _ {0} = \left\langle \frac {e^{- \beta \left( H _ {0} - f A \right)}} {Z _ {0}} A \right\rangle \label{10.84A}\]with\[Z _ {0} = \left\langle e^{- \beta \left( H _ {0} - f A \right)} \right\rangle \label{10.84B}\]and\[\langle A \rangle _ {\infty} = \left\langle \frac {e^{- \beta H _ {0}}} {Z _ {\infty}} A \right\rangle \label{10.85A}\]with\[Z _ {\infty} = \left\langle e^{- \beta H _ {0}} \right\rangle \label{10.85B}\]If we make the classical linear response approximation, in which the applied potential \(fA\) is very small relative to \(H_0\), then \[e^{- \beta \left( H _ {0} - f A \right)} \approx e^{- \beta H _ {0}} ( 1 + \beta f A ) \label{10.86}\]and \(Z _ {0} \approx Z _ {\infty}\), so that \[\delta A = \langle A \rangle _ {0} - \langle A \rangle _ {\infty} \approx \beta f \left\langle A^{2} \right\rangle \label{10.87}\]and the time dependent relaxation is given by the classical correlation function \[\delta A (t) = \beta f \langle A ( 0 ) A (t) \rangle \label{10.88}\]For a description that works for the quantum case, let’s start with the system under \(H_o\) at \(t=-∞\), ramp up the external potential at a slow rate \(\eta\) until \(t=0\), and then abruptly shut off the external potential and watch the system. We will describe the behavior in the limit \(\eta → 0\). \[H = \left\{\begin{array} {l l} {H _ {0} + f A e^{\eta t}} & {t < 0} \\ {H _ {0}} & {t \geq 0} \end{array} \right. 
\label{10.89}\]Writing the time-dependence in terms of a convolution over the impulse response function \(R\), we have\[\overline {\delta A (t)} = \lim _ {\eta \rightarrow 0} \int _ {- \infty}^{0} d t^{\prime} \Theta \left( t - t^{\prime} \right) R \left( t - t^{\prime} \right) e^{\eta t^{\prime}} f \label{10.90}\]Although the integration over the applied force is over times \(t^{\prime} < 0\), the step function enforces causality, \(t \geq t^{\prime}\). Now, expressing \(R\) as a Fourier transform over the imaginary part of the susceptibility, we obtain \[\left.\begin{aligned} \overline {\delta A (t)} & = \lim _ {\eta \rightarrow 0} \frac {f} {2 \pi} \int _ {- \infty}^{0} d t^{\prime} \int _ {- \infty}^{\infty} d \omega e^{( \eta - i \omega ) t^{\prime}} e^{i \omega t} \chi^{\prime \prime} ( \omega ) \\ & = \frac {f} {2 \pi} \int _ {- \infty}^{\infty} d \omega \,\mathrm{P} \left( \frac {1} {- i \omega} \right) \chi^{\prime \prime} ( \omega ) e^{i \omega t} \\ & = \frac {f} {2 \pi i} \int _ {- \infty}^{\infty} d \omega \chi^{\prime} ( \omega ) e^{i \omega t} \\ & = f C^{\prime} (t) \end{aligned} \right. \label{10.91}\]A more careful derivation of this result that treats the quantum mechanical operators properly is found in the references. This page titled 11.4: Relaxation of a Prepared State is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
12.1: A Classical Description of Spectroscopy
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/12%3A_Time-domain_Description_of_Spectroscopy/12.01%3A_A_Classical_Description_of_Spectroscopy | The traditional quantum mechanical treatment of spectroscopy is a static representation of a very dynamic process. An oscillating light field acts to drive bound charges in matter, which under resonance conditions leads to efficient exchange of energy between the light and matter. This dynamical picture emerges from a time-domain description, which shares many similarities with a classical description. Since much of the physical intuition that is helpful in understanding spectroscopy naturally emerges from the classical view, we will describe it first.The classical view begins with the observation that atoms and molecules are composed of charged particles, and these charges are the handle by which an electromagnetic field exerts a force on the atom or molecule. The force exerted on the molecules depends on the form of the potential binding the charges together, the magnitude of the charges, and the strength of the external field.The simplest elements of a model that captures what happens in absorption spectroscopy require us to consider a charged particle in a bound potential interacting with an oscillating driving force. The matter can be expressed in terms of a particle with charge \(z\) in a harmonic potential (the leading term in any expansion of the potential in the coordinate \(Q\)):\[V _ {r e s} ( Q ) = \dfrac {1} {2} \kappa Q^{2} \label{11.1}\]Here \(\kappa\) is the restoring force constant. For the light field, we use the traditional expression\[V _ {e x t} (t) = - \overline {\mu} \cdot \overline {E} (t) \label{11.2}\]for an external electromagnetic field interacting with the dipole moment of the system, \(\overline {\mu} = z Q\). We describe the behavior of this system using Newton’s equation of motion \(F=ma\), which we write as\[m \dfrac {\partial^{2} Q} {\partial t^{2}} = F _ {r e s} + F _ {d a m p} + F _ {e x t} \label{11.3}\]On the right hand side of Equation \ref{11.3} there are three forces: the harmonic restoring force, a damping force, and the driving force exerted by the light. Remembering that\[F = - ( \partial V / \partial Q )\]we can write Equation \ref{11.3} as\[m \dfrac {\partial^{2} Q} {\partial t^{2}} = - \kappa Q - b \dfrac {\partial Q} {\partial t} + F _ {0} \cos ( \omega t ) \label{11.4}\]Here, \(b\) describes the rate of damping. For the field, we have only considered the time-dependence\[\overline {E} (t) = \overline {E} _ {0} \cos ( \omega t )\]and the amplitude of the driving force\[F _ {0} = \left( \dfrac {\partial \overline {\mu}} {\partial Q} \right) \cdot \overline {E} _ {0} \label{11.5}\]Equation \ref{11.5} indicates that increasing the force on the oscillator is achieved by raising the magnitude of the field, increasing the change of the dipole moment with displacement \((\partial \overline{\mu} / \partial Q)\), or improving the alignment between the electric field polarization and the transition dipole moment.
We can rewrite Equation \ref{11.4} as the driven harmonic oscillator equation:\[\dfrac {\partial^{2} Q} {\partial t^{2}} + 2 \gamma \dfrac {\partial Q} {\partial t} + \omega _ {0}^{2} Q = \dfrac {F _ {0}} {m} \cos ( \omega t ) \label{11.6}\]Here the damping constant \(\gamma = b / 2 m\) and the harmonic resonance frequency \(\omega _ {0} = \sqrt {\kappa / m}\).Let’s look at the solution to Equation \ref{11.6} for a couple of simple cases.First, for the case that there is no damping or driving force (\(\gamma = F _ {0} = 0\)), we have simple harmonic solutions which oscillate at a frequency \(\omega _ {0}\):\[Q (t) = A \sin \left( \omega _ {0} t \right) + B \cos \left( \omega _ {0} t \right)\]Let’s just keep the \(\sin\) term for now. If we now add damping to the equation:\[Q (t) = A e^{- \gamma t} \sin \Omega _ {0} t\]The coordinate oscillates at a reduced frequency\[\Omega _ {0} = \sqrt {\omega _ {0}^{2} - \gamma^{2}}\]As we continue, let’s assume a case with weak damping for which \(\Omega _ {0} \approx \omega _ {0}\).The solution to Equation \ref{11.6} takes the form\[Q (t) = \dfrac {F _ {0} / m} {\sqrt {\left( \omega _ {0}^{2} - \omega^{2} \right)^{2} + 4 \gamma^{2} \omega^{2}}} \sin ( \omega t + \beta ) \label{11.7}\]where the phase factor is\[\tan \beta = \left( \omega _ {0}^{2} - \omega^{2} \right) / 2 \gamma \omega \label{11.8}\]So this solution to the displacement of the particle says that the amplitude certainly depends on the magnitude of the driving force, but more importantly on the resonance condition. The frequency of the driving field should match the natural resonance frequency of the system, \(\omega \approx \omega _ {0}\) … like pushing someone on a swing. When you drive the system at the resonance frequency there will be an efficient transfer of power to the oscillator, but if you push with arbitrary frequency, nothing will happen. Indeed, that is what an absorption spectrum is: a measure of the power absorbed by the system from the field.Notice that the coordinate oscillates at the driving frequency \(\omega\) and not at the resonance frequency \(\omega_0\). Also, the particle oscillates as a sine, that is, 90° out-of-phase with the field when driven on resonance. This reflects the fact that the maximum force can be exerted on the particle when it is stationary at the turning points. The phase shift \(\beta\) varies with the detuning from resonance. Now we can make some simplifications to Equation \ref{11.7} and calculate the absorption spectrum. For weak damping \(\gamma \ll \omega _ {0}\) and near resonance \(\omega \approx \omega _ {0}\), we can write\[\left( \omega _ {0}^{2} - \omega^{2} \right)^{2} = \left( \omega _ {0} - \omega \right)^{2} \left( \omega _ {0} + \omega \right)^{2} \approx 4 \omega _ {0}^{2} \left( \omega _ {0} - \omega \right)^{2} \label{11.9}\]The absorption spectrum is a measure of the power transferred to the oscillator, so we can calculate it by finding the power absorbed from the force on the oscillator times the velocity, averaged over a cycle of the field.\[ \begin{align} P _ {a v g} &= \left\langle F (t) \cdot \dfrac {\partial Q} {\partial t} \right\rangle _ {a v g} \\[4pt] &= \dfrac {\gamma F _ {0}^{2}} {4 m} \dfrac {1} {\left( \omega - \omega _ {0} \right)^{2} + \gamma^{2}} \label{11.10} \end{align}\]This is the Lorentzian lineshape, which is peaked at the resonance frequency and has a line width of \(2\gamma\) (full width half-maximum, FWHM). 
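A short numerical sketch (mine, with arbitrary parameter values) checks both the stated full width at half maximum of Equation \ref{11.10} and the integrated area quoted next:

```python
# Sketch: width and area of the Lorentzian absorption lineshape (11.10).
import numpy as np

m, gamma, w0, F0 = 1.0, 0.05, 3.0, 1.0
w = np.linspace(w0 - 2.5, w0 + 2.5, 200001)
P = (gamma * F0**2 / (4 * m)) / ((w - w0)**2 + gamma**2)

half = P >= P.max() / 2
print(w[half][-1] - w[half][0], 2 * gamma)       # FWHM equals 2*gamma
print(np.trapz(P, w), np.pi * F0**2 / (4 * m))   # area, up to ~1% tail loss
```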
The area under the lineshape is \(\pi F _ {0}^{2} / 4 m\). This page titled 12.1: A Classical Description of Spectroscopy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
12.2: Time-Correlation Function Description of Absorption Lineshape
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/12%3A_Time-domain_Description_of_Spectroscopy/12.02%3A_Time-Correlation_Function_Description_of_Absorption_Lineshape | The interaction of light and matter as we have described from Fermi’s Golden Rule gives the rates of transitions between discrete eigenstates of the material Hamiltonian \(H_0\). The frequency dependence to the transition rate is proportional to an absorption spectrum. We also know that interaction with the light field prepares a superposition of the eigenstates of \(H_0\), and this leads to the periodic oscillation of amplitude between the states. Nonetheless, the transition rate expression really seems to hide any time-dependent description of motions in the system. An alternative approach to spectroscopy is to recognize that the features in a spectrum are just a frequency domain representation of the underlying molecular dynamics of molecules. For absorption, the spectrum encodes the time-dependent changes of the molecular dipole moment for the system, which in turn depends on the position of electrons and nuclei.A time-correlation function for the dipole operator can be used to describe the dynamics of an equilibrium ensemble that dictate an absorption spectrum. We will make use of the transition rate expressions from first-order perturbation theory that we derived in the previous section to express the absorption of radiation by dipoles as a correlation function in the dipole operator. Let’s start with the rate of absorption and stimulated emission between an initial state \(| \ell \rangle\) and final state \(| k \rangle\) induced by a monochromatic field\[w _ {k \ell} = \dfrac {\pi E _ {0}^{2}} {2 \hbar^{2}} | \langle k | \hat {\varepsilon} \cdot \overline {\mu} | \ell \rangle |^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{11.11}\]For shorthand we have written\[\left| \overline {\mu} _ {k \ell} \right|^{2} = | \langle k | \hat {\varepsilon} \cdot \overline {\mu} | \ell \rangle |^{2}.\]We would like to use this to calculate the experimentally observable absorption coefficient (cross-section), which describes the transmission through the sample\[T = \exp [ - \Delta N \alpha ( \omega ) L ] \label{11.12}\]The absorption cross section describes the rate of energy absorption per unit time relative to the intensity of light incident on the sample\[\alpha = \dfrac {- \dot {E} _ {r a d}} {I} \label{11.13}\]The incident intensity is\[I = \frac {c} {8 \pi} E _ {0}^{2} \label{11.14}\]If we have two discrete states \(| m \rangle\) and \(| n \rangle\) with \(E _ {m} > E _ {n}\), the rate of energy absorption is proportional to the absorption rate and the transition energy\[- \dot {E} _ {r a d} = w _ {m n} \cdot \hbar \omega _ {m n} \label{11.15}\]For an ensemble this rate must be scaled by the probability of occupying the initial state.More generally, we want to consider the rate of energy loss from the field as a result of the difference in rates of absorption and stimulated emission between states populated with a thermal distribution.So, summing over all possible initial and final states \(| \ell \rangle\) and \(| k \rangle\) drawn from the upper and lower states \(| m \rangle\) and \(| n \rangle\), we have\[\left.\begin{aligned} - \dot {E} _ {\text {rad}} & = \sum _ {\ell , k \in \{ m , n \}} p _ {\ell} w _ {k \ell} \hbar \omega _ {k \ell} \\ & = \dfrac {\pi E _ {0}^{2}} {2 \hbar} \sum _ {\ell , k \in \{ m , n \}} \omega _ {k \ell} p _ {\ell} \left| \overline {\mu} _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \end{aligned} \right. \label{11.16}\]The cross section including the net change in energy as a result of absorption \(| n \rangle \rightarrow | m \rangle\) and stimulated emission \(| m \rangle \rightarrow | n \rangle\) is:\[\alpha ( \omega ) = \dfrac {4 \pi^{2}} {\hbar c} \sum _ {n , m} \left[ \omega _ {m n} p _ {n} \left| \overline {\mu} _ {m n} \right|^{2} \delta \left( \omega _ {m n} - \omega \right) + \omega _ {n m} p _ {m} \left| \overline {\mu} _ {n m} \right|^{2} \delta \left( \omega _ {n m} + \omega \right) \right] \label{11.17}\]To simplify Equation \ref{11.17}, we note that \(\omega_{nm} = -\omega_{mn}\), \(\left|\overline{\mu}_{nm}\right|^2 = \left|\overline{\mu}_{mn}\right|^2\), and \(\delta(\omega_{nm} + \omega) = \delta(\omega_{mn} - \omega)\). So,\[\alpha ( \omega ) = \dfrac {4 \pi^{2} \omega} {\hbar c} \sum _ {n , m} \left( p _ {n} - p _ {m} \right) \left| \overline {\mu} _ {m n} \right|^{2} \delta \left( \omega _ {m n} - \omega \right) \label{11.18}\]Here we see that the absorption coefficient depends on the population difference between the two states. This is expected since absorption will lead to loss of intensity, whereas stimulated emission leads to gain. With equal populations in the upper and lower state, no change to the incident field would be expected. Since\[p _ {n} - p _ {m} = p _ {n} \left( 1 - \exp \left[ - \beta \hbar \omega _ {m n} \right] \right) \label{11.19}\]\[\alpha ( \omega ) = \dfrac {4 \pi^{2}} {\hbar c} \omega \left( 1 - e^{- \beta \hbar \omega} \right) \sum _ {n , m} p _ {n} \left| \overline {\mu} _ {m n} \right|^{2} \delta \left( \omega _ {m n} - \omega \right) \label{11.20}\]Again the \(\omega _ {m n}\) factor has been replaced with \(\omega\), which is equivalent under the delta function. 
We can now separate \(\alpha\) into a product of factors that represent the field, and the matter, where the matter is described by \(\sigma ( \omega )\), the absorption lineshape.\[\alpha ( \omega ) = \dfrac {4 \pi^{2}} {\hbar c} \omega \left( 1 - e^{- \beta \hbar \omega} \right) \sigma ( \omega ) \label{11.21}\]\[\sigma ( \omega ) = \sum _ {n , m} p _ {n} \left| \overline {\mu} _ {m n} \right|^{2} \delta \left( \omega _ {m n} - \omega \right) \label{11.22}\]To express the lineshape in terms of a correlation function we use one representation of the delta function through a Fourier transform of a complex exponential:\[\delta \left( \omega _ {m n} - \omega \right) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t \,e^{i \left( \omega _ {m n} - \omega \right) t} \label{11.23}\]to write\[\sigma ( \omega ) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t \sum _ {n , m} p _ {n} \langle n | \overline {\mu} | m \rangle \langle m | \overline {\mu} | n \rangle e^{i \left( \omega _ {m n} - \omega \right) t} \label{11.24}\]Now, using \(U _ {0} | n \rangle = e^{- i H _ {0} t / \hbar} | n \rangle = e^{- i E _ {n} t / \hbar} | n \rangle\), and recognizing that the sum over \(m\) in our expression is a closure relation,\[\sigma ( \omega ) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t \sum _ {n , m} p _ {n} \langle n | \overline {\mu} | m \rangle \left\langle m \left| U _ {0}^{\dagger} \overline {\mu} U _ {0} \right| n \right\rangle e^{- i \omega t}\]\[= \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t \sum _ {n} p _ {n} \left\langle n \left| \overline {\mu} _ {I} ( 0 ) \overline {\mu} _ {I} (t) \right| n \right\rangle e^{- i \omega t} \label{11.25}\]But this last expression is just a dipole moment correlation function: the equilibrium thermal average over a pair of time-dependent dipole operators:\[\sigma ( \omega ) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t e^{- i \omega t} \left\langle \overline {\mu} _ {I} ( 0 ) \overline {\mu} _ {I} (t) \right\rangle \label{11.26}\]The absorption lineshape is given by the Fourier transform of the dipole correlation function. The correlation function describes the time-dependent behavior of spontaneous fluctuations in the dipole moment in the absence of the field, and contains information on the states of the system and on the broadening due to relaxation. Additional manipulations can be used to switch the order of operators by taking the complex conjugate of the exponential\[\sigma ( \omega ) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t e^{i \omega t} \left\langle \overline {\mu} _ {I} (t) \overline {\mu} _ {I} ( 0 ) \right\rangle \label{11.27}\]and we can add back the polarization of the light field to the matrix element\[\sigma ( \omega ) = \dfrac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d t e^{i \omega t} \left\langle \hat {\varepsilon} \cdot \overline {\mu} _ {I} (t) \hat {\varepsilon} \cdot \overline {\mu} _ {I} ( 0 ) \right\rangle \label{11.28}\]to emphasize the orientational component of this correlation function. Here we have written the operators in a form that emphasizes the interaction picture representation. 
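To illustrate Equation \ref{11.27} numerically, here is a sketch (mine) that Fourier transforms a model dipole correlation function with a single oscillation frequency and exponential damping; the assumed functional form of \(C(t)\) is purely for illustration, and the resulting lineshape is a Lorentzian.

```python
# Sketch: absorption lineshape as the Fourier transform of a model dipole
# correlation function C(t) = exp(-i w0 t - t/T2), per Equation (11.27),
# using the symmetry C(-t) = C*(t) to fold the integral onto t >= 0.
import numpy as np

w0, T2 = 5.0, 2.0
t = np.linspace(0.0, 100.0, 2**16)
C = np.exp(-1j * w0 * t - t / T2)

w = np.linspace(3.0, 7.0, 801)
sigma = np.array([np.trapz(np.exp(1j * wi * t) * C, t).real / np.pi
                  for wi in w])

lorentz = (1.0 / (np.pi * T2)) / ((w - w0)**2 + 1.0 / T2**2)
print(np.max(np.abs(sigma - lorentz)))   # small: numerically a Lorentzian
```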
As we move forward, we will drop this notation, and take it as understood that for the purposes of spectroscopy, the dipole operator is expressed in the interaction picture and evolves under the material Hamiltonian \(H_0\).This page titled 12.2: Time-Correlation Function Description of Absorption Lineshape is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
12.3: Different Types of Spectroscopy Emerge from the Dipole Operator
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/12%3A_Time-domain_Description_of_Spectroscopy/12.03%3A_Different_Types_of_Spectroscopy_Emerge_from_the_Dipole_Operator | So the absorption spectrum in any frequency region is given by the Fourier transform over the dipole correlation function that describes the time-evolving charge distributions in molecules, solids, and nanosystems. Let’s consider how this manifests itself in a few different spectroscopies, which have different contributions to the dipole operator. In general the dipole operator is a relatively simple representation of the charged particles of the system:\[\vec{\mu} = \sum _ {i} q _ {i} \left( \vec{r} _ {i} - \vec{r} _ {0} \right) \label{11.29}\]The complexity arises from the time-dependence of this operator, which evolves under the full Hamiltonian for the system:\[\vec{\mu} (t) = e^{i H _ {0} t / \hbar} \vec{\mu} ( 0 ) e^{- i H _ {0} t / \hbar} \label{11.30}\]where\[\begin{align} H _ {0} = H _ {e l e c} + H _ {v i b} + H _ {r o t} + H _ {t r a n s} + H _ {s p i n} + \cdots + H _ {b a t h} + \cdots \\[4pt] + \sum _ {i , j \in \{ e , v , r , t , s , b , E M \}} H _ {i - j} + \cdots \label{11.31} \end{align}\]The full Hamiltonian accounts for the dynamics of all electronic, nuclear, and spin degrees of freedom. It is expressed in Equation \ref{11.31} in terms of separable contributions from all possible degrees of freedom and a bath Hamiltonian that contains all of the dark degrees of freedom not explicitly included in the dipole operator. We could also include an electromagnetic field. The last term describes pairwise couplings between different degrees of freedom, such as the electron-nuclear interaction \(H_{e-v}\) and spin-orbit coupling \(H_{e-s}\). The wavefunction for the system can be expressed in terms of product states of the wavefunctions for the different degrees of freedom,\[| \psi \rangle = | \psi _ {e l e c} \psi _ {v i b} \psi _ {r o t} \cdots \rangle \label{11.32}\]When the \(H_{i-j}\) interaction terms are neglected, the correlation function can be separated into a product of correlation functions from various sources:\[C _ {\mu \mu} (t) = C _ {e l e c} (t) C _ {v i b} (t) C _ {r o t} (t) \cdots \label{11.33}\]For instance, the vibrational factor takes the form\[C _ {v i b} (t) = \sum _ {n} p _ {n} \left\langle \Phi _ {n} \left| e^{i H _ {v i b} t / \hbar} \overline {\mu} \, e^{- i H _ {v i b} t / \hbar} \overline {\mu} \right| \Phi _ {n} \right\rangle \label{11.34}\]\(\Phi _ {n}\) is the wavefunction for the nth vibrational eigenstate. The net correlation function will have oscillatory components at many frequencies and its Fourier transform will give the full absorption spectrum from the ultraviolet to the microwave regions of the spectrum. Generally speaking the highest frequency contributions (electronic or UV/Vis) will be modulated by contributions from lower frequency motions (such as vibrations and rotations). However, we can separately analyze each of these contributions to the spectrum. For atomic transitions, \(H _ {0} = H _ {\text {atom}}\), 
and for hydrogenic orbitals the relevant eigenstates are \(| n \rangle \rightarrow | n \ell m _ {\ell} \rangle\). Turning to rotational spectroscopy, from a classical perspective the dipole moment can be written in terms of a permanent dipole moment with amplitude and direction\[\vec{\mu} = \mu _ {0} \hat {u} \label{11.35}\]\[\sigma ( \omega ) = \int _ {- \infty}^{+ \infty} d t e^{- i \omega t} \mu _ {0}^{2} \langle \hat {\varepsilon} \cdot \hat {u} ( 0 ) \hat {\varepsilon} \cdot \hat {u} (t) \rangle \label{11.36}\]The lineshape is the Fourier transform of the rotational motion of the permanent dipole vector in the laboratory frame. \(\mu _ {0}\) is the magnitude of the permanent dipole moment averaged over the fast electronic and vibrational degrees of freedom. The frequency of the resonance would depend on the rate of rotation, that is, on the angular momentum and the moment of inertia. Collisions or other damping would lead to the broadening of the lines.Quantum mechanically we expect a series of rotational resonances that mirror the thermal occupation and degeneracy of rotational states for the system. Taking the case of a rigid rotor with cylindrical symmetry as an example, the Hamiltonian is\[H _ {r o t} = \dfrac {\overline {L}^{2}} {2 I} \label{11.37}\]and the wavefunctions are spherical harmonics, \(Y _ {J , M} ( \theta , \phi )\), which are described by\[\overline {L}^{2} | Y _ {J , M} \rangle = \hbar^{2} J ( J + 1 ) | Y _ {J , M} \rangle \label{11.38A}\]with \(J = 0,1,2 \ldots \label{11.38B}\) and\[L _ {z} | Y _ {J , M} \rangle = M \hbar | Y _ {J , M} \rangle \label{11.38C}\]with\[M = - J , - J + 1 , \ldots , J \label{11.38D}\]where \(J\) is the rotational quantum number and \(M\) (or \(M_J\)) refers to its projection onto an axis (z), and has a degeneracy of \(g_M(J)=2J+1\). The energy eigenvalues for \(H_{rot}\) are\[E _ {J , M} = \tilde{B} J ( J + 1 ) \label{11.39}\]where the rotational constant, here in units of joules, is\[\tilde{B} = \frac {\hbar^{2}} {2 I} \label{11.40}\]If we take a dipole operator in the form of Equation \ref{11.35}, then the far-infrared rotational spectrum will be described by the correlation function\[ C _ {r o t} (t) = \sum _ {J , M} p _ {J , M} \left| \mu _ {0} \right|^{2} \left\langle Y _ {J , M} \left| e^{i H _ {r o t} t / \hbar} ( \hat {u} \cdot \hat {\varepsilon} ) e^{- i H _ {r o t} t / \hbar} ( \hat {u} \cdot \hat {\varepsilon} ) \right| Y _ {J , M} \right\rangle \label{11.41}\]The evaluation of this correlation function involves an orientational average, which is evaluated as follows\[ \left\langle Y _ {J , M} | f ( \theta , \phi ) | Y _ {J , M} \right\rangle = \frac {1} {4 \pi} \int _ {0}^{2 \pi} d \varphi \int _ {0}^{\pi} \sin \theta d \theta Y _ {J , M}^{*} f ( \theta , \phi ) Y _ {J , M} \label{11.42}\]Recognizing that \(\left( \hat {u} \cdot \hat {\varepsilon} _ {z} \right) = \cos \theta\), we can evaluate Equation \ref{11.41} using the reduction formula\[\cos \theta | Y _ {J , M} \rangle = c _ {J +} | Y _ {J + 1 , M} \rangle + c _ {J -} | Y _ {J - 1 , M} \rangle \label{11.43}\]with\[c _ {J +} = \sqrt {\frac {( J + 1 )^{2} - M^{2}} {4 ( J + 1 )^{2} - 1}}\]and\[c _ {J -} = \sqrt {\frac {J^{2} - M^{2}} {4 J^{2} - 1}} \label{11.44}\]and the orthogonality of spherical harmonics\[\left\langle Y _ {J^{\prime} , M^{\prime}} | Y _ {J , M} \right\rangle = 4 \pi \delta _ {J^{\prime} , J} \delta _ {M^{\prime} , M} \label{11.45}\]The factor \(p _ {J , M}\) in Equation \ref{11.41} is the probability of thermally occupying a particular \(J,\,M\) level; a numerical sketch of the resulting stick spectrum is given below.
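Anticipating the lineshape derived next (Equation \ref{11.47}), the following sketch (mine, with arbitrary illustrative values for \(\tilde{B}\) and \(T\)) tabulates the thermally weighted rotational absorption lines of a rigid rotor; only the absorption branch is shown, not the stimulated emission terms.

```python
# Sketch: thermally weighted rotational absorption lines of a rigid rotor,
# following the weights and line positions of Equation (11.47).
import numpy as np

B = 2.0e-23                      # rotational constant B~ in joules (illustrative)
kT = 4.14e-21                    # k_B T near room temperature, in joules
hbar = 1.055e-34

J = np.arange(0, 40)
weight = (2 * J + 1) * np.exp(-B * J * (J + 1) / kT)
weight /= weight.sum()           # relative thermal weight of each J level

omega = 2 * B * (J + 1) / hbar   # absorption lines J -> J+1, in rad/s
for j in J[:8]:
    print(f"J={j}: omega = {omega[j]:.3e} rad/s, weight = {weight[j]:.3f}")
# Lines are evenly spaced by 2B/hbar; intensities peak near J ~ sqrt(kT/2B)
```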
For this we recognize that\[p _ {J , M} = g _ {M} ( J ) e^{- \beta E _ {J}} / Z _ {r o t}\]so that Equation \ref{11.41} leads to the correlation function\[C _ {r o t} (t) = \frac {\left| \mu _ {0} \right|^{2}} {Z _ {r o t}} \sum _ {J , M} e^{- \beta E _ {J}} \left[ c _ {J +}^{2} e^{- i 2 \tilde{B} ( J + 1 ) t / \hbar} + c _ {J -}^{2} e^{+ i 2 \tilde{B} J t / \hbar} \right] \label{11.46}\]Fourier transforming Equation \ref{11.46} leads to the lineshape\[\sigma _ {r o t} ( \omega ) = \frac {\left| \mu _ {0} \right|^{2}} {Z _ {r o t}} \hbar \sum _ {J} ( 2 J + 1 ) e^{- \beta \tilde{B} J ( J + 1 )} [ \delta ( \hbar \omega - 2 \tilde{B} ( J + 1 ) ) + \delta ( \hbar \omega + 2 \tilde{B} J ) ] \label{11.47}\]The two terms reflect the fact that each thermally populated level with \(J > 0\) contributes both to absorptive and stimulated emission processes, and the observed intensity reflects the difference in populations.Vibrational spectroscopy can be described by taking the dipole moment to be weakly dependent on the displacement of vibrational coordinates\[\overline {\mu} = \overline {\mu} _ {0} + \left. \frac {\partial \overline {\mu}} {\partial q} \right| _ {q = q _ {0}} q + \cdots \label{11.48}\]Here the first expansion term is the permanent dipole moment and the second term is the transition dipole moment. If we are performing our ensemble average over vibrational states, the lineshape becomes the Fourier transform of a correlation function in the vibrational coordinate\[\sigma ( \omega ) = \left| \frac {\partial \overline {\mu}} {\partial q} \right|^{2} \int _ {- \infty}^{+ \infty} d t \, e^{- i \omega t} \langle q ( 0 ) q (t) \rangle \label{11.49}\]The vector nature of the transition dipole has been dropped here. So the time-dependent dynamics of the vibrational coordinate dictate the IR lineshape.This approach holds for the classical and quantum mechanical cases. In the case of quantum mechanics, the change in charge distribution in the transition dipole moment is replaced with the equivalent transition dipole matrix element\[| \partial \overline {\mu} / \partial q |^{2} \Rightarrow \left| \overline {\mu} _ {k \ell} \right|^{2}\]If we take the vibrational Hamiltonian to be that of a harmonic oscillator,\[H _ {v i b} = \frac {1} {2 m} p^{2} + \frac {1} {2} m \omega _ {0}^{2} q^{2} = \hbar \omega _ {0} \left( a^{\dagger} a + \frac {1} {2} \right)\]then the time-dependence of the vibrational coordinate, expressed as raising and lowering operators, is\[q (t) = \sqrt {\frac {\hbar} {2 m \omega _ {0}}} \left( a^{\dagger} e^{i \omega _ {0} t} + a e^{- i \omega _ {0} t} \right)\]The absorption lineshape is then obtained from Equation \ref{11.49}. Performing the thermal average gives\[\sigma _ {v i b} ( \omega ) = \left| \overline {\mu} _ {1 0} \right|^{2} \left[ \left( \overline {n} + 1 \right) \delta \left( \omega - \omega _ {0} \right) + \overline {n} \, \delta \left( \omega + \omega _ {0} \right) \right]\]where \(\overline {n} = \left( e^{\beta \hbar \omega _ {0}} - 1 \right)^{- 1}\) is the thermal occupation number. For the low temperature limit applicable to most vibrations under room temperature conditions, \(\overline {n} \rightarrow 0\) and\[\sigma _ {v i b} ( \omega ) = \left| \overline {\mu} _ {10} \right|^{2} \delta \left( \omega - \omega _ {0} \right)\]Technically, we need second-order perturbation theory to describe Raman scattering, because transitions between two states are induced by the action of two light fields whose frequency difference equals the energy splitting between states. 
But much the same result is obtained if we replace the dipole operator with an induced dipole moment generated by the incident field: \(\overline {\mu} \Rightarrow \overline {\mu} _ {i n d}\). The incident field \(E_i\) polarizes the molecule,\[\overline {\mu} _ {i n d} = \overline {\overline {\alpha}} \cdot \overline {E} _ {i} (t) \label{11.54}\](\(\overline {\overline{\alpha}}\) is the polarizability tensor), and the scattered light field results from the interaction with this induced dipole\[\begin{align} V (t) & = - \overline {\mu} _ {i n d} \cdot \overline {E} _ {s} (t) \\[4pt] & = - \overline {E} _ {s} (t) \cdot \overline {\overline {\alpha}} \cdot \overline {E} _ {i} (t) \\[4pt] & = - E _ {s} (t) E _ {i} (t) \left( \hat {\varepsilon} _ {s} \cdot \overline {\overline{\alpha}} \cdot \hat {\varepsilon} _ {i} \right) \label{11.55} \end{align} \]Here we have written the polarization components of the incident (\(i\)) and scattered (\(s\)) light projecting onto the polarizability tensor \( \overline{\overline {\alpha}}\). Equation \ref{11.55} leads to an expression for the Raman lineshape as\[\begin{align} \sigma ( \omega ) &= \int _ {- \infty}^{+ \infty} d t e^{- i \omega t} \left\langle \hat {\varepsilon} _ {s} \cdot \overline {\overline{\alpha}} ( 0 ) \cdot \hat {\varepsilon} _ {i} \, \hat {\varepsilon} _ {s} \cdot \overline {\overline {\alpha}} (t) \cdot \hat {\varepsilon} _ {i} \right\rangle \\[4pt] &= \int _ {- \infty}^{+ \infty} d t e^{- i \omega t} \langle \overline {\overline {\alpha}} ( 0 ) \overline {\overline {\alpha}} (t) \rangle \label{11.56} \end{align} \]To evaluate this, the polarizability tensor can also be expanded in the nuclear coordinates\[\overline {\overline {\alpha}} = \overline {\overline {\alpha} _ {0}} + \left. \frac {\partial \overline {\overline {\alpha}}} {\partial q} \right| _ {q = q _ {0}} q + \cdots \label{11.57}\]where the leading term would lead to Rayleigh scattering and rotational Raman spectra, and the second term would give vibrational Raman scattering. Also remember that the polarizability tensor is a second rank tensor that tells you how well a light field polarized along \(i\) can induce a dipole moment (light-field-induced charge displacement) in the \(s\) direction. For cylindrically symmetric systems which have a polarizability component \(\alpha _ {\|}\) along the principal axis of the molecule and a component \(\alpha _ {\perp}\) perpendicular to that axis, this usually takes the form\[\overline {\overline {\alpha}} = \left( \begin{array} {c c c} {\alpha _ {\|}} & {} & {} \\ {} & {\alpha _ {\perp}} & {} \\ {} & {} & {\alpha _ {\perp}} \end{array} \right) = \alpha \mathbf {I} + \frac {1} {3} \beta \left( \begin{array} {c c c} {2} & {} & {} \\ {} & {- 1} & {} \\ {} & {} & {- 1} \end{array} \right) \label{11.58}\]where \(\alpha\) is the isotropic component of the polarizability tensor and \(\beta\) is the anisotropic component.This page titled 12.3: Different Types of Spectroscopy Emerge from the Dipole Operator is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
12.4: Ensemble Averaging and Line-Broadening
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/12%3A_Time-domain_Description_of_Spectroscopy/12.04%3A_Ensemble_Averaging_and_Line-Broadening | We have seen that an absorption lineshape can represent the dynamics of the dipole or be broadened by energy relaxation, for instance through coupling to a continuum. However, there are numerous processes that can influence the lineshape. These can be separated into dynamic processes intrinsic to the molecular system, termed homogeneous broadening, and static effects, known as inhomogeneous broadening, which can be considered an ensemble averaging effect. To illustrate, imagine that the dipole correlation function has an oscillatory, damped form\[C _ {\mu \mu} (t) = \sum _ {g , e} p _ {g} \left| \mu _ {e g} \right|^{2} \exp \left[ - i \omega _ {e g} t - \Gamma t \right] \label{11.59}\]Then the Fourier transform would give a lineshape\[\operatorname {Re} \left[ \tilde {C} _ {\mu \mu} ( \omega ) \right] = \sum _ {g , e} p _ {g} \frac {\left| \mu _ {e g} \right|^{2} \Gamma} {\left( \omega - \omega _ {e g} \right)^{2} + \Gamma^{2}} \label{11.60}\]Here the homogeneous effects are reflected in the factor \(\Gamma\), the damping rate and linewidth, whereas inhomogeneous effects arise from averaging over the ensemble.Several dynamical mechanisms can potentially contribute to damping and line-broadening. These intrinsically molecular processes, often referred to as homogeneous broadening, are commonly assigned a time scale \(T _ {2} = \Gamma^{- 1}\).Population relaxation refers to decay in the coherence created by the light field as a result of the finite lifetime of the coupled states, and is often assigned a time scale \(T_1\). This can have contributions from radiative decay, such as spontaneous emission, or non-radiative processes such as relaxation as a result of coupling to a continuum.\[\frac {1} {T _ {1}} = \frac {1} {\tau _ {r a d}} + \frac {1} {\tau _ {N R}} \label{11.61}\]The observed population relaxation time depends on both the relaxation times of the upper and lower states (\(m\) and \(n\)) being coupled by the field:\[1 / T _ {1} = w _ {m n} + w _ {n m}.\]When the energy splitting is high compared to \(k_BT\), only the downward rate contributes, which is why the rate is often written \(1/2T_1\).Pure dephasing is characterized by a time constant \(T_2^{*}\) that characterizes the randomization of phase within an ensemble as a result of molecular interactions. This is a dynamic effect in which memory of the phase of oscillation of a molecule is lost as a result of intermolecular interactions that randomize the phase. Examples include collisions in a dense gas, or fluctuations induced by a solvent. This process does not change the population of the states involved.Orientational relaxation (\(\tau_{or}\)) also leads to relaxation of the dipole correlation function and to line-broadening. Since the correlation function depends on the projection of the dipole onto a fixed axis in the laboratory frame, randomization of the initial dipole orientations is an ensemble averaged dephasing effect.
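Anticipating the total correlation function of Equation \ref{11.63} below, a short sketch (mine, with arbitrary values of \(T_2\) and \(\Delta\)) contrasts the lineshapes obtained in the purely homogeneous and the strongly inhomogeneous limits:

```python
# Sketch: lineshapes from a model correlation function with homogeneous
# damping (1/T2) and an inhomogeneous (static Gaussian) width Delta,
# anticipating Equation (11.63). All parameter values are illustrative.
import numpy as np

def lineshape(T2, Delta, weg=10.0):
    t = np.linspace(0.0, 60.0, 2**15)
    C = np.exp(-1j * weg * t - t / T2 - 0.5 * (Delta * t)**2)
    w = np.linspace(weg - 6, weg + 6, 1201)
    sig = [np.trapz(np.exp(1j * wi * t) * C, t).real / np.pi for wi in w]
    return w, np.array(sig)

w, homog = lineshape(T2=1.0, Delta=0.0)     # homogeneous: Lorentzian
w, inhom = lineshape(T2=50.0, Delta=1.0)    # inhomogeneous: nearly Gaussian
for name, s in (("homogeneous", homog), ("inhomogeneous", inhom)):
    half = s >= s.max() / 2
    print(name, "FWHM =", round(w[half][-1] - w[half][0], 2))
# Expected: Lorentzian FWHM = 2/T2 = 2.0; Gaussian FWHM = Delta*sqrt(8 ln2) ~ 2.36
```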
In solution, orientational relaxation is commonly treated as a diffusion problem in which \(\tau_{or}\) is proportional to the diffusion constant.If these homogeneous processes are independent, the rates for different processes contribute additively to the damping and line width:\[\frac {1} {T _ {2}} = \frac {1} {T _ {1}} + \frac {1} {T _ {2}^{*}} + \frac {1} {\tau _ {o r}} \label{11.62}\]Absorption lineshapes can also be broadened by a static distribution of frequencies. If molecules within the ensemble are influenced by static environmental variations more than by these dynamic processes, then the observed lineshape reports on the distribution of environments. This inhomogeneous broadening is a static ensemble averaging effect, which hides the dynamical content in the homogeneous linewidth. The origin of the inhomogeneous broadening can be molecular (for instance a distribution of defects in crystals) or macroscopic (e.g., an inhomogeneous magnetic field in NMR).The inhomogeneous linewidth is dictated by the width of the distribution, \(\Delta\).The total observed broadening of the absorption lineshape reflects the contribution of all of these effects:\[C _ {\mu \mu} \propto \exp \left[ - i \omega _ {e g} t - \left( \frac {1} {T _ {2}^{*}} + \frac {1} {2 T _ {1}} + \frac {1} {\tau _ {o r}} \right) t - \frac {\Delta^{2}} {2} t^{2} \right] \label{11.63}\]These effects can be wrapped into a lineshape function \(g(t)\). The lineshape for the broadening of a given transition can be written as the Fourier transform over the oscillating transition frequency damped and modulated by a complex \(g(t)\):\[\sigma ( \omega ) = \int _ {- \infty}^{+ \infty} d t \, e^{i \omega t} e^{- i \omega _ {eg} t - g (t)} \label{11.64}\]All of these effects can be present simultaneously in an absorption spectrum.This page titled 12.4: Ensemble Averaging and Line-Broadening is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
13.1: The Displaced Harmonic Oscillator Model
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/13%3A_Coupling_of_Electronic_and_Nuclear_Motion/13.01%3A_The_Displaced_Harmonic_Oscillator_Model | Here we will discuss the displaced harmonic oscillator (DHO), a widely used model that describes the coupling of nuclear motions to electronic states. Although it has many applications, we will look at the specific example of electronic absorption experiments, and thereby gain insight into the vibronic structure in absorption spectra. Spectroscopically, it can also be used to describe wavepacket dynamics; coupling of electronic and vibrational states to intramolecular vibrations or solvent; or coupling of electronic states in solids or semiconductors to phonons. As we will see, further extensions of this model can be used to describe fundamental chemical rate processes, interactions of a molecule with a dissipative or fluctuating environment, and Marcus Theory for nonadiabatic electron transfer.Molecular excited states have geometries that are different from the ground state configuration as a result of the change in electron configuration. This parametric dependence of electronic energy on nuclear configuration results in a variation of the electronic energy gap between states as one stretches bond vibrations of the molecule. We are interested in describing how this effect influences the electronic absorption spectrum, and thereby gain insight into how one experimentally determines the coupling between electronic and nuclear degrees of freedom. We consider electronic transitions between bound potential energy surfaces for a ground and excited state as we displace a nuclear coordinate \(q\). The simplified model consists of two harmonic oscillator potentials whose 0-0 energy splitting is \(E _ {e} - E _ {g}\), and an energy gap that depends on \(q\). We will calculate the absorption spectrum in the interaction picture using the time-correlation function for the dipole operator.We start by writing a Hamiltonian that contains two terms for the potential energy surfaces of the electronically excited state \(| E \rangle\) and ground state \(| G \rangle\)\[H _ {0} = H _ {G} + H _ {E} \label{12.1}\]These terms describe the dependence of the electronic energy on the displacement of a nuclear coordinate \(q\). 
Since the state of the system depends parametrically on the level of vibrational excitation, we describe it using product states in the electronic and nuclear configuration, \(| \Psi \rangle = | \psi_{\text{elec}}, \Phi_{nuc} \rangle\), or in the present case

\[\begin{align} | G \rangle &= | g, n_g \rangle \\[4pt] | E \rangle &= | e, n_e \rangle \label{12.2} \end{align}\]

Implicit in this model is a Born-Oppenheimer approximation in which the product states are the eigenstates of \(H_0\), i.e.

\[H_G | G \rangle = \left( E_g + E_{n_g} \right) | G \rangle\]

The Hamiltonian for each surface contains an electronic energy in the absence of vibrational excitation, and a vibronic Hamiltonian that describes the change in energy with nuclear displacement.

\[\begin{align} H_G &= | g \rangle E_g \langle g | + H_g (q) \\[4pt] H_E &= | e \rangle E_e \langle e | + H_e (q) \label{12.3} \end{align}\]

For our purposes, the vibronic Hamiltonian is harmonic and has the same curvature in the ground and excited states; however, the excited state is displaced by \(d\) relative to the ground state along a coordinate \(q\).

\[\begin{align} H_g &= \frac{p^2}{2m} + \frac{1}{2} m \omega_0^2 q^2 \label{12.4} \\[4pt] H_e &= \frac{p^2}{2m} + \frac{1}{2} m \omega_0^2 (q - d)^2 \label{12.5} \end{align}\]

The operator \(q\) acts only to change the degree of vibrational excitation on the \(|E\rangle\) or \(|G\rangle\) surface.

We now wish to evaluate the dipole correlation function

\[\begin{align} C_{\mu\mu}(t) &= \langle \overline{\mu}(t) \overline{\mu}(0) \rangle \\[4pt] &= \sum_{\ell = E,G} p_\ell \left\langle \ell \left| e^{i H_0 t/\hbar}\, \overline{\mu}\, e^{-i H_0 t/\hbar}\, \overline{\mu} \right| \ell \right\rangle \label{12.6} \end{align}\]

Here \(p_\ell\) is the joint probability of occupying a particular electronic and vibrational state, \(p_\ell = p_{\ell,elec}\, p_{\ell,vib}\). The time propagator is

\[e^{-i H_0 t/\hbar} = | G \rangle e^{-i H_G t/\hbar} \langle G | + | E \rangle e^{-i H_E t/\hbar} \langle E | \label{12.7}\]

We begin by making the Condon Approximation, which states that there is no nuclear dependence for the dipole operator. It is only an operator in the electronic states.

\[\overline{\mu} = | g \rangle \mu_{ge} \langle e | + | e \rangle \mu_{eg} \langle g | \label{12.8}\]

This approximation implies that transitions between electronic surfaces occur without a change in nuclear coordinate, which on a potential energy diagram is a vertical transition.

Under typical conditions, the system will only be on the ground electronic state at equilibrium, and substituting Equations \ref{12.7} and \ref{12.8} into Equation \ref{12.6}, we find:

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 e^{-i(E_e - E_g)t/\hbar} \left\langle e^{i H_g t/\hbar} e^{-i H_e t/\hbar} \right\rangle \label{12.9}\]

Here the oscillations at the electronic energy gap are separated from the nuclear dynamics in the final factor, the dephasing function:

\[\begin{align} F(t) &= \left\langle e^{i H_g t/\hbar} e^{-i H_e t/\hbar} \right\rangle \\[4pt] &= \left\langle U_g^\dagger U_e \right\rangle \label{12.10} \end{align}\]

The average \(\langle \ldots \rangle\) in Equations \ref{12.9} and \ref{12.10} is only over the vibrational states \(| n_g \rangle\).
Note that physically the dephasing function describes the time-dependent overlap of the nuclear wavefunction on the ground state with the time-evolution of the same wavepacket initially projected onto the excited state

\[F(t) = \left\langle \varphi_g(t) | \varphi_e(t) \right\rangle \label{12.11}\]

This is a perfectly general expression that does not depend on the particular form of the potential. If you have knowledge of the nuclear and electronic eigenstates or the nuclear dynamics on your ground and excited state surfaces, this expression is your route to the absorption spectrum.

To evaluate \(F(t)\) for this problem, it helps to realize that we can write the nuclear Hamiltonians as

\[\begin{align} H_g &= \hbar\omega_0 \left( a^\dagger a + \tfrac{1}{2} \right) \label{12.12} \\[4pt] H_e &= \hat{D} H_g \hat{D}^\dagger \label{12.13} \end{align}\]

Here \(\hat{D}\) is the spatial displacement operator

\[\hat{D} = \exp(-i\hat{p}d/\hbar) \label{12.14}\]

which shifts an operator in space as:

\[\hat{D}\hat{q}\hat{D}^\dagger = \hat{q} - d \label{12.15}\]

Note \(\hat{p}\) is only an operator in the vibrational degree of freedom. We can now express the excited state Hamiltonian in terms of a shifted ground state Hamiltonian in Equation \ref{12.13}, and also relate the time propagators on the ground and excited states

\[e^{-i H_e t/\hbar} = \hat{D}\, e^{-i H_g t/\hbar}\, \hat{D}^\dagger \label{12.16}\]

Substituting Equation \ref{12.16} into Equation \ref{12.10} allows us to write

\[\begin{align} F(t) &= \left\langle U_g^\dagger\, e^{-i d \hat{p}/\hbar}\, U_g\, e^{i d \hat{p}/\hbar} \right\rangle \\ &= \left\langle \hat{D}(t) \hat{D}^\dagger(0) \right\rangle \label{12.17} \end{align}\]

Equation \ref{12.17} says that the effect of the nuclear motion in the dipole correlation function can be expressed as a time-correlation function for the displacement of the vibration.

To evaluate Equation \ref{12.17} we write it as

\[F(t) = \left\langle e^{-i d \hat{p}(t)/\hbar} e^{i d \hat{p}(0)/\hbar} \right\rangle \label{12.18}\]

since

\[\hat{p}(t) = U_g^\dagger\, \hat{p}(0)\, U_g \label{12.19}\]

The time-evolution of \(\hat{p}\) is obtained by expressing it in raising and lowering operator form,

\[\hat{p} = i\sqrt{\frac{m\hbar\omega_0}{2}} \left( a^\dagger - a \right) \label{12.20}\]

and evaluating Equation \ref{12.19} using Equation \ref{12.12}. Remembering \(a^\dagger a = n\), we find

\[\begin{align} U_g^\dagger\, a\, U_g &= e^{i n \omega_0 t}\, a\, e^{-i n \omega_0 t} = a\, e^{i(n-1)\omega_0 t} e^{-i n \omega_0 t} = a\, e^{-i\omega_0 t} \\ U_g^\dagger\, a^\dagger\, U_g &= a^\dagger e^{+i\omega_0 t} \label{12.21} \end{align}\]

which gives

\[\hat{p}(t) = i\sqrt{\frac{m\hbar\omega_0}{2}} \left( a^\dagger e^{i\omega_0 t} - a e^{-i\omega_0 t} \right) \label{12.22}\]

So for the dephasing function we now have

\[F(t) = \left\langle \exp\left[ \underset{\sim}{d} \left( a^\dagger e^{i\omega_0 t} - a e^{-i\omega_0 t} \right) \right] \exp\left[ -\underset{\sim}{d} \left( a^\dagger - a \right) \right] \right\rangle \label{12.23}\]

where we have defined a dimensionless displacement variable

\[\underset{\sim}{d} = d\sqrt{\frac{m\omega_0}{2\hbar}} \label{12.24}\]

Since \(a^\dagger\) and \(a\) do not commute (\(\left[ a^\dagger, a \right] = -1\)), we split the exponential operators using the identity

\[e^{\hat{A}+\hat{B}} = e^{\hat{A}} e^{\hat{B}} e^{-\frac{1}{2}[\hat{A},\hat{B}]} \label{12.25}\]

or specifically for \(a^\dagger\) and \(a\),

\[e^{\lambda a^\dagger + \mu a} = e^{\lambda a^\dagger} e^{\mu a} e^{\frac{1}{2}\lambda\mu} \label{12.26}\]

This leads to

\[F(t) = \left\langle \exp\left[ \underset{\sim}{d}\, a^\dagger\, e^{i\omega_0 t} \right] \exp\left[ -\underset{\sim}{d}\, a\, e^{-i\omega_0 t} \right] \exp\left[ -\frac{1}{2}\underset{\sim}{d}^2 \right] \exp\left[ -\underset{\sim}{d}\, a^\dagger \right] \exp\left[ \underset{\sim}{d}\, a \right] \exp\left[ -\frac{1}{2}\underset{\sim}{d}^2 \right] \right\rangle \label{12.27}\]

Now to simplify our work further, let’s specifically consider the low temperature case in which we are only in the ground vibrational state at equilibrium, \(| n_g \rangle = | 0 \rangle\). Since \(a|0\rangle = 0\) and \(\langle 0 | a^\dagger = 0\)

\[\begin{align} e^{-\lambda a} | 0 \rangle &= | 0 \rangle \\[4pt] \langle 0 | e^{\lambda a^\dagger} &= \langle 0 | \label{12.28} \end{align}\]

and

\[F(t) = e^{-\underset{\sim}{d}^2} \left\langle 0 \left| \exp\left[ -\underset{\sim}{d}\, a\, e^{-i\omega_0 t} \right] \exp\left[ -\underset{\sim}{d}\, a^\dagger \right] \right| 0 \right\rangle \label{12.29}\]

In principle, we can evaluate these expressions by expanding the exponential operators. However, the evaluation becomes much easier if we can exchange the order of operators. Remembering that these operators do not commute, and using

\[e^{\hat{A}} e^{\hat{B}} = e^{\hat{B}} e^{\hat{A}} e^{-[\hat{B},\hat{A}]} \label{12.30}\]

we can write

\[\begin{align} F(t) &= e^{-\underset{\sim}{d}^2} \left\langle 0 \left| \exp\left[ -\underset{\sim}{d}\, a^\dagger \right] \exp\left[ -\underset{\sim}{d}\, a\, e^{-i\omega_0 t} \right] \exp\left[ \underset{\sim}{d}^2 e^{-i\omega_0 t} \right] \right| 0 \right\rangle \\ &= \exp\left[ \underset{\sim}{d}^2 \left( e^{-i\omega_0 t} - 1 \right) \right] \label{12.31} \end{align}\]

So finally, we have the dipole correlation function:

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 \exp\left[ -i\omega_{eg}t + D\left( e^{-i\omega_0 t} - 1 \right) \right] \label{12.32}\]

\(D\) is known as the Huang-Rhys parameter (which should be distinguished from the displacement operator \(\hat{D}\)). It is a dimensionless factor related to the mean square displacement

\[D = \underset{\sim}{d}^2 = d^2\, \frac{m\omega_0}{2\hbar} \label{12.33}\]

and therefore represents the strength of coupling of the electronic states to the nuclear degree of freedom.
Note our correlation function has the form

\[C_{\mu\mu}(t) = \sum_n p_n \left| \mu_{mn} \right|^2 e^{-i\omega_{mn}t - g(t)} \label{12.34}\]

Here \(g(t)\) is our lineshape function

\[g(t) = -D\left( e^{-i\omega_0 t} - 1 \right) \label{12.35}\]

To illustrate the form of these functions, below are plotted the real and imaginary parts of \(C_{\mu\mu}(t)\), \(F(t)\), and \(g(t)\) for \(D = 1\) and \(\omega_{eg} = 10\omega_0\). \(g(t)\) oscillates with the frequency of the single vibrational mode. \(F(t)\) quantifies the overlap of vibrational wavepackets on the ground and excited state, which peaks once every vibrational period. \(C_{\mu\mu}(t)\) has the same information as \(F(t)\), but is also modulated at the electronic energy gap \(\omega_{eg}\).

The absorption lineshape is obtained by Fourier transforming Equation \ref{12.32}

\[\begin{align} \sigma_{abs}(\omega) &= \int_{-\infty}^{+\infty} dt\, e^{i\omega t} C_{\mu\mu}(t) \\[4pt] &= \left| \mu_{eg} \right|^2 e^{-D} \int_{-\infty}^{+\infty} dt\, e^{i\omega t} e^{-i\omega_{eg}t} \exp\left[ D e^{-i\omega_0 t} \right] \label{12.36} \end{align}\]

If we now expand the final term as

\[\exp\left[ D e^{-i\omega_0 t} \right] = \sum_{n=0}^{\infty} \frac{1}{n!} D^n \left( e^{-i\omega_0 t} \right)^n \label{12.37}\]

the lineshape is

\[\sigma_{abs}(\omega) = \left| \mu_{eg} \right|^2 \sum_{n=0}^{\infty} e^{-D} \frac{D^n}{n!} \delta\left( \omega - \omega_{eg} - n\omega_0 \right) \label{12.38}\]

The spectrum is a progression of absorption peaks rising from \(\omega_{eg}\), separated by \(\omega_0\), with a Poisson distribution of intensities. This is a vibrational progression accompanying the electronic transition. The amplitudes of these peaks are given by the Franck–Condon coefficients for the overlap of vibrational states in the ground and excited states

\[\left| \left\langle n_g = 0 | n_e = n \right\rangle \right|^2 = | \langle 0 | \hat{D} | n \rangle |^2 = e^{-D} \frac{D^n}{n!}\]

The intensities of these peaks are dependent on \(D\), which is a measure of the coupling strength between nuclear and electronic degrees of freedom. Illustrated below is an example of the normalized absorption lineshape corresponding to this correlation function for \(D = 1\).

Now let’s investigate how the absorption lineshape depends on \(D\). For \(D = 0\), there is no dependence of the electronic energy gap \(\omega_{eg}\) on the nuclear coordinate, and only one resonance is observed. For \(D < 1\), the dependence of the energy gap on \(q\) is weak and the absorption maximum is at \(\omega_{eg}\), with the amplitude of the vibronic progression falling off as \(D^n\). For \(D > 1\), the strong coupling regime, the transition with the maximum intensity is found for the peak at \(n \approx D\). So \(D\) corresponds roughly to the mean number of vibrational quanta excited from \(q = 0\) in the ground state. This is the Franck-Condon principle, that transition intensities are dictated by the vertical overlap between nuclear wavefunctions in the two electronic surfaces.

To investigate the envelope for these transitions, we can perform a short time expansion of the correlation function applicable for \(t < 1/\omega_0\) and for \(D \gg 1\).
If we approximate the oscillatory term in the lineshape function as

\[\exp\left( -i\omega_0 t \right) \approx 1 - i\omega_0 t - \frac{1}{2}\omega_0^2 t^2 \label{12.40}\]

then the lineshape envelope is

\[\begin{align} \sigma_{env}(\omega) &= \left| \mu_{eg} \right|^2 \int_{-\infty}^{+\infty} dt\, e^{i\omega t} e^{-i\omega_{eg}t} e^{D\left( \exp\left( -i\omega_0 t \right) - 1 \right)} \\ &\approx \left| \mu_{eg} \right|^2 \int_{-\infty}^{+\infty} dt\, e^{i(\omega - \omega_{eg})t} e^{D\left[ -i\omega_0 t - \frac{1}{2}\omega_0^2 t^2 \right]} \\ &= \left| \mu_{eg} \right|^2 \int_{-\infty}^{+\infty} dt\, e^{i\left( \omega - \omega_{eg} - D\omega_0 \right)t} e^{-\frac{1}{2}D\omega_0^2 t^2} \label{12.41} \end{align}\]

This can be solved by completing the square, giving

\[\sigma_{env}(\omega) = \left| \mu_{eg} \right|^2 \sqrt{\frac{2\pi}{D\omega_0^2}} \exp\left[ -\frac{\left( \omega - \omega_{eg} - D\omega_0 \right)^2}{2D\omega_0^2} \right] \label{12.42}\]

The envelope has a Gaussian profile which is centered at the Franck–Condon vertical transition

\[\omega = \omega_{eg} + D\omega_0 \label{12.43}\]

Thus we can equate \(D\) with the mean number of vibrational quanta excited in \(|E\rangle\) on absorption from the ground state. Also, we can define the vibrational energy in \(|E\rangle\) on excitation at \(q = 0\)

\[\begin{align} \lambda &= D\hbar\omega_0 \\[4pt] &= \frac{1}{2}m\omega_0^2 d^2 \label{12.44} \end{align}\]

\(\lambda\) is known as the reorganization energy. This is the value of \(H_e\) at \(q = 0\), which reflects the excess vibrational excitation on the excited state that occurs on a vertical transition from the ground state. It is therefore the energy that must be dissipated by vibrational relaxation on the excited state surface as the system re-equilibrates following absorption.

Illustration of how the strength of coupling \(D\) influences the absorption lineshape \(\sigma\) (Equation \ref{12.38}) and dipole correlation function \(C_{\mu\mu}\) (Equation \ref{12.32}). Also shown: the Gaussian approximation to the absorption profile (Equation \ref{12.42}), and the dephasing function (Equation \ref{12.31}).

The DHO model also leads to predictions about the form of the emission spectrum from the electronically excited state. The vibrational excitation on the excited state potential energy surface induced by electronic absorption rapidly dissipates through vibrational relaxation, typically on picosecond time scales. Vibrational relaxation leaves the system in the ground vibrational state of the electronically excited surface, with an average displacement that is larger than that of the ground state. In the absence of other non-radiative relaxation processes, the most efficient way of relaxing back to the ground state is by emission of light, i.e., fluorescence. In the Condon approximation this occurs through vertical transitions from the excited state minimum to a vibrationally excited state on the ground electronic surface.
The difference between the absorption and emission frequencies reflects the energy of the initial excitation which has been dissipated non-radiatively into vibrational motion both on the excited and ground electronic states, and is referred to as the Stokes shift.

From the DHO model, the emission lineshape can be obtained from the dipole correlation function assuming that the initial state is equilibrated in \(| e, 0 \rangle\), centered at a displacement \(q = d\), following the rapid dissipation of energy \(\lambda\) on the excited state. Based on the energy gap at \(q = d\), we see that a vertical emission from this point leaves \(\lambda\) as the vibrational energy that needs to be dissipated on the ground state in order to re-equilibrate, and therefore we expect the Stokes shift to be \(2\lambda\).

Beginning with our original derivation of the dipole correlation function and focusing on emission, we find that fluorescence is described by

\[\begin{align} C_{fl}(t) &= \langle e, 0 | \mu(t)\mu(0) | e, 0 \rangle \\ &= \left| \mu_{eg} \right|^2 e^{-i\omega_{eg}t} F^*(t) \label{12.45} \\[4pt] F^*(t) &= \left\langle U_e^\dagger U_g \right\rangle \\[4pt] &= \exp\left[ D\left( e^{i\omega_0 t} - 1 \right) \right] \label{12.46} \end{align}\]

We note that \(F^*(t) = F(-t)\). Then we can obtain the fluorescence spectrum

\[\begin{align} \sigma_{fl}(\omega) &= \int_{-\infty}^{+\infty} dt\, e^{i\omega t} C_{fl}(t) \\[4pt] &= \left| \mu_{eg} \right|^2 \sum_{n=0}^{\infty} e^{-D} \frac{D^n}{n!} \delta\left( \omega - \omega_{eg} + n\omega_0 \right) \end{align} \label{12.47}\]

This is a spectrum with the same features as the absorption spectrum, although with mirror symmetry about \(\omega_{eg}\).

A short time expansion confirms that the splitting between the peak of the absorption and emission lineshape envelopes is \(2D\hbar\omega_0\), or \(2\lambda\). Further, one can establish that

\[\begin{aligned} \sigma_{abs}(\omega) &= \int_{-\infty}^{+\infty} dt\, e^{i(\omega - \omega_{eg})t + g(t)} \\ \sigma_{fl}(\omega) &= \int_{-\infty}^{+\infty} dt\, e^{i(\omega - \omega_{eg})t + g^*(t)} \\ g(t) &= D\left( e^{-i\omega_0 t} - 1 \right) \end{aligned} \label{12.48}\]

Note that our description of the fluorescence lineshape emerged from our semiclassical treatment of the light–matter interaction, and in practice fluorescence involves spontaneous emission of light into a quantum mechanical light field. However, while the light field must be handled differently, the form of the dipole correlation function and the resulting lineshape remains unchanged. Additionally, we assumed that there was a time scale separation between the vibrational relaxation in the excited state and the time scale of emission, so that the system can be considered equilibrated in \(| e, 0 \rangle\). When this assumption is not valid then one must account for the much more complex possibility of emission during the course of the relaxation process.
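To make Equations \ref{12.38} and \ref{12.47} concrete, here is a minimal Python sketch (the values of \(D\), \(\omega_{eg}\), and \(\omega_0\) are illustrative assumptions) that builds the Poisson-weighted stick spectra and verifies that the mean absorption and emission frequencies are split by the Stokes shift \(2D\omega_0 = 2\lambda/\hbar\).

```python
import numpy as np
from math import factorial

def dho_sticks(D, w_eg=100.0, w0=10.0, nmax=12):
    """Poisson-weighted vibronic sticks: absorption (Eq. 12.38) builds up
    from w_eg in steps of +w0; fluorescence (Eq. 12.47) mirrors with -w0."""
    n = np.arange(nmax)
    I = np.exp(-D) * D**n / np.array([factorial(k) for k in n])  # FC factors
    return w_eg + n * w0, w_eg - n * w0, I

w_abs, w_fl, I = dho_sticks(D=1.0)

# First moments are split by the Stokes shift 2*D*w0 = 2*lambda/hbar
stokes = (w_abs * I).sum() / I.sum() - (w_fl * I).sum() / I.sum()
print(stokes)  # ~20.0 for D = 1, w0 = 10
```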
13.2: Coupling to a Harmonic Bath
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/13%3A_Coupling_of_Electronic_and_Nuclear_Motion/13.02%3A_Coupling_to_a_Harmonic_Bath

It is worth noting a similarity between the Hamiltonian for this displaced harmonic oscillator problem, and a general form for the interaction of an electronic “system” that is observed in an experiment with a harmonic oscillator “bath” whose degrees of freedom are invisible to the observable, but which influence the behavior of the system. This reasoning will in fact be developed more carefully later for the description of fluctuations. While the Hamiltonians we have written so far describe coupling to a single bath degree of freedom, the DHO model is readily generalized to many vibrations or a continuum of nuclear motions. Coupling to a continuum, or a harmonic bath, is the starting point for developing how an electronic system interacts with a continuum of intermolecular motions and phonons typical of condensed phase systems.

So, what happens if the electronic transition is coupled to many vibrational coordinates, each with its own displacement? The extension is straightforward if we still only consider two electronic states (\(e\) and \(g\)) to which we couple a set of independent modes, i.e., a bath of harmonic normal modes. Then we can write the Hamiltonian for \(N\) vibrations as a sum over all the independent harmonic modes

\[H_e = \sum_{\alpha=1}^{N} H_e^{(\alpha)} = \sum_{\alpha=1}^{N} \left( \frac{p_\alpha^2}{2m_\alpha} + \frac{1}{2} m_\alpha \omega_\alpha^2 \left( q_\alpha - d_\alpha \right)^2 \right) \label{12.49}\]

each with its own distinct frequency and displacement. We can specify the state of the system in terms of product states in the electronic and nuclear occupation, i.e.,

\[\begin{aligned} | G \rangle &= | g; n_1, n_2, \ldots, n_N \rangle \\ &= | g \rangle \prod_{\alpha=1}^{N} | n_\alpha \rangle \end{aligned} \label{12.50}\]

Additionally, we recognize that the time propagator on the electronic excited potential energy surface is

\[U_e = \exp\left[ -\frac{i}{\hbar} H_e t \right] = \prod_{\alpha=1}^{N} U_e^{(\alpha)} \label{12.51}\]

where

\[U_e^{(\alpha)} = \exp\left[ -\frac{i}{\hbar} H_e^{(\alpha)} t \right] \label{12.52}\]

Defining \(D_\alpha = d_\alpha^2 \left( m_\alpha \omega_\alpha / 2\hbar \right)\)

\[\begin{aligned} F^{(\alpha)} &= \left\langle \left[ U_g^{(\alpha)} \right]^\dagger U_e^{(\alpha)} \right\rangle \\ &= \exp\left[ D_\alpha \left( e^{-i\omega_\alpha t} - 1 \right) \right] \end{aligned} \label{12.53}\]

the dipole correlation function is then just a product of multiple dephasing functions that characterize the time-evolution of the different vibrations.

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 e^{-i\omega_{eg}t} \cdot \prod_{\alpha=1}^{N} F^{(\alpha)}(t) \label{12.54}\]

or

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 e^{-i\omega_{eg}t + g(t)} \label{12.55}\]

with

\[g(t) = \sum_\alpha D_\alpha \left( e^{-i\omega_\alpha t} - 1 \right) \label{12.56}\]

In the time domain this is a complex beating pattern, which in the frequency domain appears as a spectrum with several superimposed vibronic progressions that follow the rules developed above; see the sketch below.
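A minimal sketch of Equations \ref{12.54}–\ref{12.56} for a few modes (the frequencies and couplings below are invented for illustration); the total dephasing function is the product of single-mode factors, or equivalently \(e^{g(t)}\):

```python
import numpy as np

# Invented mode frequencies and dimensionless couplings, for illustration
w_alpha = np.array([5.0, 13.0, 21.0])
D_alpha = np.array([0.8, 0.3, 0.1])

t = np.linspace(0.0, 10.0, 2000)

# g(t) from Eq. (12.56); exp(g) is the product of single-mode dephasing
# functions F^(alpha)(t) of Eq. (12.53)
g = (D_alpha[:, None] * (np.exp(-1j * np.outer(w_alpha, t)) - 1.0)).sum(axis=0)
F = np.exp(g)   # complex beating pattern in the time domain
```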
Also, the reorganization energy now reflects the total excess nuclear potential energy required to make the electronic transition:

\[\lambda = \sum_\alpha D_\alpha \hbar\omega_\alpha \label{12.57}\]

Taking this a step further, the generalization to a continuum of nuclear states emerges naturally. Given that we have a continuous frequency distribution of normal modes characterized by a density of states, \(W(\omega)\), and a continuously varying frequency-dependent coupling \(D(\omega)\), we can change the sum in Equation \ref{12.56} to an integral over the density of states:

\[g(t) = \int d\omega\, W(\omega) D(\omega) \left( e^{-i\omega t} - 1 \right) \label{12.58}\]

Here the product \(W(\omega)D(\omega)\) is a coupling-weighted density of states, and is commonly referred to as a spectral density.

What this treatment does is provide a way of introducing a bath of states that the spectroscopically interrogated transition couples with. Coupling to a bath or continuum provides a way of introducing relaxation effects or damping of the electronic coherence in the absorption spectrum. You can see that if \(g(t)\) contains a term \(-\Gamma t\) that is linear in time with a constant \(\Gamma\), we obtain a Lorentzian lineshape with width \(\Gamma\). This emerges under certain circumstances, for instance if the distribution of states and coupling is large and constant, and if the integral in Equation \ref{12.58} is over a distribution of low frequencies, such that \(e^{-i\omega t} \approx 1 - i\omega t\). More generally the lineshape function is complex, and the real part describes damping and the imaginary part modulates the primary frequency and leads to fine structure. We will discuss these circumstances in more detail later.

As described above, the single mode DHO model is for a pure state, but the approach can be readily extended to describe a canonical ensemble. In this case, the correlation function is averaged over a thermal distribution of initial states. If we take the initial state of the system to be in the electronic ground state and its vibrational levels (\(n_g\)) to be occupied as a Boltzmann distribution, which is characteristic of ambient temperature samples, then the dipole correlation function can be written as a thermally averaged dephasing function:

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 e^{-i\omega_{eg}t} F(t) \label{12.59}\]

\[F(t) = \sum_{n_g} p(n_g) \left\langle n_g \left| U_g^\dagger U_e \right| n_g \right\rangle \label{12.60}\]

\[p(n_g) = \frac{e^{-\beta\hbar\omega_0 n_g}}{Z} \label{12.61}\]

Evaluating these expressions using the strategies developed above leads to

\[C_{\mu\mu}(t) = \left| \mu_{eg} \right|^2 e^{-i\omega_{eg}t} \exp\left[ D\left[ (\overline{n}+1)\left( e^{-i\omega_0 t} - 1 \right) + \overline{n}\left( e^{+i\omega_0 t} - 1 \right) \right] \right] \label{12.62}\]

\(\overline{n}\) is the thermally averaged occupation number of the harmonic vibrational mode.

\[\overline{n} = \left( e^{\beta\hbar\omega_0} - 1 \right)^{-1} \label{12.63}\]

Note that in the low temperature limit, \(\overline{n} \rightarrow 0\), and Equation \ref{12.62} equals our original result Equation \ref{12.32}.
The dephasing function in Equation \ref{12.62} has two terms, of which the first describes those electronic absorption events in which the vibrational quantum number increases or is unchanged (\(n_e \geq n_g\)), whereas the second is for those processes where the vibrational quantum number decreases or is unchanged (\(n_e \leq n_g\)). The latter are only allowed at elevated temperature where thermally excited states are populated and are known as “hot bands”.

Now, let’s calculate the lineshape. If we separate the dephasing function into a product of two exponential terms and expand each of these exponentials, we can Fourier transform to give

\[\sigma_{abs}(\omega) = \left| \mu_{eg} \right|^2 e^{-D(2\overline{n}+1)} \sum_{j=0}^{\infty} \sum_{k=0}^{\infty} \left( \frac{D^{j+k}}{j!k!} \right) (\overline{n}+1)^j\, \overline{n}^k\, \delta\left( \omega - \omega_{eg} - (j-k)\omega_0 \right) \label{12.64}\]

Here the summation over \(j\) describes \(n_e \geq n_g\) transitions, whereas the summation over \(k\) describes \(n_e \leq n_g\). For any one transition frequency, \(\omega_{eg} + n\omega_0\), the net absorption is a sum over all possible combinations of transitions at the energy splitting with \(n = (j-k)\). Again, if we set \(\overline{n} \rightarrow 0\), we obtain our original result Equation \ref{12.38}. The contributions with \(k > 0\) vanish in this limit, since the hot bands require thermal population of excited vibrational levels.

Examples of the temperature dependence of the lineshape and dephasing functions for \(D = 1\): the real part changes in amplitude, growing with temperature, whereas the imaginary part is unchanged.

We can extend this description to describe coupling to many independent nuclear modes or coupling to a continuum. We write the state of the system in terms of the electronic state and the nuclear quantum numbers, i.e., \(| E \rangle = | e; n_1, n_2, n_3 \ldots \rangle\) and from that:

\[F(t) = \exp\left[ \sum_j D_j \left[ \left( \overline{n}_j + 1 \right)\left( e^{-i\omega_j t} - 1 \right) + \overline{n}_j \left( e^{i\omega_j t} - 1 \right) \right] \right] \label{12.65}\]

or changing to an integral over a continuous frequency distribution of normal modes characterized by a density of states, \(W(\omega)\)

\[F(t) = \exp\left[ \int d\omega\, W(\omega) D(\omega) \left[ (\overline{n}(\omega)+1)\left( e^{-i\omega t} - 1 \right) + \overline{n}(\omega)\left( e^{i\omega t} - 1 \right) \right] \right] \label{12.66}\]

\(D(\omega)\) is the frequency dependent coupling.
Let’s look at the envelope of the nuclear structure on the transition by doing a short-time expansion on the complex exponential as in Equation \ref{12.40}

\[F(t) = \exp\left[ \int d\omega\, D(\omega) W(\omega) \left( -i\omega t - (2\overline{n}(\omega)+1)\frac{\omega^2 t^2}{2} \right) \right] \label{12.67}\]

The lineshape is calculated from

\[\sigma_{abs}(\omega) = \int_{-\infty}^{+\infty} dt\, e^{i(\omega - \omega_{eg})t} \exp[-i\langle\omega\rangle t] \exp\left[ -\frac{1}{2}\left\langle \omega^2 \right\rangle t^2 \right] \label{12.68}\]

where we have defined the mean vibrational excitation on absorption

\[\begin{align} \langle\omega\rangle &= \int d\omega\, W(\omega) D(\omega)\, \omega \\[4pt] &= \lambda/\hbar \label{12.69} \end{align}\]

and

\[\left\langle \omega^2 \right\rangle = \int d\omega\, W(\omega) D(\omega)\, \omega^2 \left( 2\overline{n}(\omega)+1 \right) \label{12.70}\]

\(\left\langle \omega^2 \right\rangle\) reflects the thermally averaged distribution of accessible vibrational states. Completing the square of Equation \ref{12.68} gives

\[\sigma_{abs}(\omega) = \left| \mu_{eg} \right|^2 \sqrt{\frac{2\pi}{\left\langle \omega^2 \right\rangle}} \exp\left[ \frac{-\left( \omega - \omega_{eg} - \langle\omega\rangle \right)^2}{2\left\langle \omega^2 \right\rangle} \right] \label{12.71}\]

The lineshape is Gaussian, with a transition maximum at the electronic resonance plus the reorganization energy. Although the frequency shift \(\langle\omega\rangle\) is not temperature dependent, the width of the Gaussian is temperature-dependent as a result of the thermal occupation factor in Equation \ref{12.70}.
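The finite-temperature result, Equation \ref{12.64}, is easy to tabulate numerically. A sketch (assumed parameter values; \(\hbar = k_B = 1\) units) that generates the two-sided progression and shows the hot bands growing with temperature:

```python
import numpy as np
from math import factorial

def thermal_sticks(D, w0, kT, w_eg=0.0, nmax=15):
    """Two-sided vibronic progression of Eq. (12.64), hbar = kB = 1.
    j indexes upward (n_e >= n_g) transitions, k the hot bands."""
    nbar = 1.0 / (np.exp(w0 / kT) - 1.0)   # Eq. (12.63)
    freqs, amps = [], []
    for j in range(nmax):
        for k in range(nmax):
            A = (np.exp(-D * (2 * nbar + 1)) * D**(j + k)
                 / (factorial(j) * factorial(k)) * (nbar + 1)**j * nbar**k)
            freqs.append(w_eg + (j - k) * w0)
            amps.append(A)
    return np.array(freqs), np.array(amps)

# Hot bands (peaks below w_eg) grow as kT approaches and exceeds w0
f_cold, a_cold = thermal_sticks(D=1.0, w0=10.0, kT=1.0)
f_warm, a_warm = thermal_sticks(D=1.0, w0=10.0, kT=10.0)
```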
13.3: Semiclassical Approximation to the Dipole Correlation Function
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/13%3A_Coupling_of_Electronic_and_Nuclear_Motion/13.03%3A_Semiclassical_Approximation_to_the_Dipole_Correlation_Function

In introducing the influence of dark degrees of freedom on the spectroscopy of a bright state, we made some approximations that are not always valid, such as the Condon approximation and the second cumulant approximation. To develop tools that allow us to work outside of these approximations, it is worth revisiting the evaluation of the dipole correlation function and looking at this a bit more carefully. In particular, we will describe the semiclassical approximation, which is a useful representation of the dipole correlation function when one wants to describe the dark degrees of freedom (the bath) using classical molecular dynamics simulations.

For a quantum mechanical material system interacting with a light field, the full Hamiltonian is

\[H = H_0 + V(t) \label{12.72}\]

\[V(t) = -\overline{m} \cdot \overline{E}(t) \label{12.73}\]

\(\overline{m} = \sum_i z_i \overline{r}_i\) is the quantum mechanical dipole operator, where \(z_i\) are charges. The absorption lineshape is given by the Fourier transformation of the dipole autocorrelation function \(C_{\mu\mu}\):

\[C_{\mu\mu}(t) = \langle \overline{m}(t)\overline{m}(0) \rangle = \operatorname{Tr}\left( \rho_{eq}\, \overline{m}(t)\overline{m}(0) \right) \label{12.74}\]

and the time dependence in \(\overline{m}\) is expressed in terms of the usual time-propagator:

\[\overline{m}(t) = \hat{U}_0^\dagger\, \overline{m}\, \hat{U}_0 \label{12.75}\]

\[\hat{U}_0 = e_+^{-\frac{i}{\hbar}\int_0^t H_0(t')\, dt'} \label{12.76}\]

In principle, the time development of the dipole moment for all degrees of freedom can be obtained directly from ab initio molecular dynamics simulations.

For a more practical expression in which we wish to focus on one or a few bright degrees of freedom, we next partition the Hamiltonian into system and bath

\[H_0 = H_S(Q) + H_B(q) + H_{SB}(Q,q) \label{12.77}\]

For purposes of spectroscopy, the system \(H_S\) refers to those degrees of freedom (\(Q\)) with which the light interacts, and which will be those in which we calculate matrix elements. The bath \(H_B\) refers to all of the other degrees of freedom (\(q\)), and the interaction between the two is accounted for in \(H_{SB}\).
Although the interaction with the light depends on how \(\overline{m}\) varies with \(Q\), the dipole operator remains a function of system and bath coordinates: \(\overline{m}(Q,q)\). We now use the interaction picture transformation to express the time propagator under the full material Hamiltonian \(\hat{U}_0\) in terms of a product of propagators in the individual terms in \(H_0\):

\[\hat{U}_0 = U_S U_B U_{SB} \label{12.78}\]

\[U_{SB} = \exp_+\left[ -\frac{i}{\hbar}\int_0^t dt'\, \mathbf{H}_{SB}(t') \right] \label{12.79}\]

\[\mathbf{H}_{SB}(t) = e^{i(H_S + H_B)t/\hbar}\, H_{SB}\, e^{-i(H_S + H_B)t/\hbar} \label{12.80}\]

Then the dipole autocorrelation function becomes

\[C_{\mu\mu} = \sum_n p_n \left\langle n \left| U_{SB}^\dagger U_B^\dagger U_S^\dagger\, \overline{m}\, U_S U_B U_{SB}\, \overline{m} \right| n \right\rangle \label{12.81}\]

where

\[p_n = \left\langle n \left| e^{-\beta H_0} \right| n \right\rangle / \operatorname{Tr}\left( e^{-\beta H_0} \right)\]

Further, to make this practical, we make an adiabatic separation between the system and bath coordinates, and say that the interaction between the system and bath is weak. This allows us to write the state of the system as product states in the system (\(a\)) and bath (\(\alpha\)); \(| n \rangle = | a, \alpha \rangle\):

\[\left( H_S + H_B \right) | a, \alpha \rangle = \left( E_a + E_\alpha \right) | a, \alpha \rangle \label{12.82}\]

With this we evaluate Equation \ref{12.81} as

\[\begin{align} C_{\mu\mu} &= \sum_{a,\alpha} p_a p_\alpha \left\langle a, \alpha \left| U_{SB}^\dagger U_B^\dagger U_S^\dagger\, \overline{m}\, U_S U_B U_{SB}\, \overline{m} \right| a, \alpha \right\rangle \\ &= \sum_{a,b,\alpha} p_a p_\alpha \left\langle \alpha \left| \left\langle a \left| U_{SB}^\dagger U_S^\dagger U_B^\dagger\, \overline{m}\, U_B U_S U_{SB} \right| b \right\rangle \overline{m}_{ba} \right| \alpha \right\rangle \label{12.83} \end{align}\]

where \(\overline{m}_{ba} = \langle b | \overline{m} | a \rangle\), and we have made use of the fact that \(H_S\) and \(H_B\) commute. Also,

\[p_a = e^{-E_a/k_BT}/Z_S.\]

Now, by recognizing that the time propagators in the system and system-bath Hamiltonians describe time evolution at the system eigenstate energy plus any modulations that the bath introduces to it

\[U_S U_{SB} | b \rangle = e^{-iE_b t/\hbar}\, | b \rangle\, e^{-\frac{i}{\hbar}\int_0^t dt'\, \delta E_b(t')} = | b \rangle\, e^{-\frac{i}{\hbar}\int_0^t dt'\, E_b(t')} \label{12.84}\]

we can write our correlation function as

\[C_{\mu\mu} = \sum_{a,b} p_a p_\alpha \left\langle \alpha \left| e^{\frac{i}{\hbar}\int_0^t dt'\, E_a(t')}\, U_B^\dagger\, \overline{m}_{ab}\, U_B\, e^{-\frac{i}{\hbar}\int_0^t dt'\, E_b(t')}\, \overline{m}_{ba} \right| \alpha \right\rangle \label{12.85}\]

\[C_{\mu\mu} = \left\langle \overline{m}_{ab}(t)\, \overline{m}_{ba}(0)\, e^{-i\int_0^t dt'\, \omega_{ba}(t')} \right\rangle_B \label{12.86}\]

\[\overline{m}_{ab}(t) = e^{iH_Bt/\hbar}\, \overline{m}_{ab}\, e^{-iH_Bt/\hbar} \label{12.87}\]

Equation \ref{12.86} is the first important result.
It describes a correlation function in the dipole operator expressed in terms of an average over the time-dependent transition moment, including its orientation, and the fluctuating energy gap. The time dependence is due to the bath, and the average refers to a trace over the bath degrees of freedom.

Let’s consider the matrix elements. These will reflect the strength of interaction of the electromagnetic field with the motion of the system coordinate, which may also be dependent on the bath coordinates. Since we have made an adiabatic approximation, to evaluate the matrix elements we would typically expand the dipole moment in the system degrees of freedom, \(Q\). As an example for one system coordinate (\(Q\)) and many bath coordinates \(q\), we can expand:

\[\overline{m}(Q,q) = \overline{m}_0 + \frac{\partial\overline{m}}{\partial Q} Q + \sum_\alpha \frac{\partial^2\overline{m}}{\partial Q\,\partial q_\alpha} Q q_\alpha + \cdots \label{12.88}\]

\(\overline{m}_0\) is the permanent dipole moment, which we can take as a constant. In the second term, \(\partial\overline{m}/\partial Q\) is the magnitude of the transition dipole moment. The third term includes the dependence of the transition dipole moment on the bath degrees of freedom, i.e., non-Condon terms. So now we can evaluate

\[\begin{aligned} \overline{m}_{ab} &= \left\langle a \left| \overline{m}_0 + \frac{\partial\overline{m}}{\partial Q} Q + \sum_\alpha \frac{\partial^2\overline{m}}{\partial Q\,\partial q_\alpha} Q q_\alpha \right| b \right\rangle \\ &= \frac{\partial\overline{m}}{\partial Q}\langle a | Q | b \rangle + \sum_\alpha \frac{\partial}{\partial q_\alpha}\frac{\partial\overline{m}}{\partial Q}\langle a | Q | b \rangle\, q_\alpha \end{aligned} \label{12.89}\]

We have set \(\left\langle a \left| \overline{m}_0 \right| b \right\rangle = 0\). Now defining the transition dipole matrix element,

\[\overline{\mu}_{ab} = \frac{\partial\overline{m}}{\partial Q}\langle a | Q | b \rangle \label{12.90}\]

we can write

\[\overline{m}_{ab} = \overline{\mu}_{ab}\left( 1 + \sum_\alpha \frac{\partial\overline{\mu}_{ab}}{\partial q_\alpha} q_\alpha \right) \label{12.91}\]

Remember that \(\overline{\mu}_{ab}\) is a vector. The bath can also change the orientation of the transition dipole moment. If we want to separate the orientational and remaining dynamics, we could split the matrix element into an orientational component specified by a unit vector along \(\partial\overline{m}/\partial Q\) and a scalar that encompasses the amplitude factors: \(\overline{\mu}_{ab} = \hat{u}_{ab}\,\mu_{ab}\). Then Equation \ref{12.86} becomes

\[C_{\mu\mu} = \left\langle \hat{u}_{ab}(t) \cdot \hat{u}_{ba}(0)\; \mu_{ab}(t)\,\mu_{ba}(0)\; e^{-i\int_0^t dt'\,\omega_{ba}(t')} \right\rangle_B \label{12.92}\]

Mixed quantum-classical spectroscopy models apply a semiclassical approximation to Equation \ref{12.86}. Employing the semiclassical approximation says that we will replace the quantum mechanical operator \(\overline{m}_{ab}(t)\) with a classical function \(M_{ab}(t)\), i.e., we replace the time propagator \(U_B\) with classical propagation of the dynamics. Also, the trace over the bath in the correlation function becomes an equilibrium ensemble average over phase space.

How do you implement the semiclassical approximation?
Replacing the time propagator \(U_B\) with classical dynamics amounts to integrating Newton’s equations for all of the bath degrees of freedom. Then you must establish how the bath degrees of freedom influence \(\omega_{ba}(t)\) and \(\overline{m}_{ab}(t)\). For the quantum operator \(\overline{m}(Q,q,t)\), only the system coordinate \(Q\) remains quantized, and following Equation \ref{12.91} we can express the orientation and magnitude of the dipole moment so that the dynamics depend on the classical degrees of freedom \(\tilde{q}_\alpha\).

\[\overline{m}_{ab} = \overline{\mu}_{ab}\left( 1 + \sum_\alpha a_\alpha \tilde{q}_\alpha \right) \label{12.93}\]

\(a_\alpha\) is a (linear) mapping coefficient

\[a_\alpha = \partial\overline{\mu}_{ab}/\partial\tilde{q}_\alpha\]

between the bath and the transition dipole moment.

In practice, use of this approximation has been handled in different ways, but practical considerations have dictated that \(\omega_{ba}(t)\) and \(\overline{m}_{ab}(t)\) are not separately calculated for each time step, but are obtained from a mapping of these variables to the bath coordinates \(q\). This mapping may be to local or collective bath coordinates, and to as many degrees of freedom as are necessary to obtain a highly correlated single-valued mapping of \(\omega_{ba}(t)\) and \(\overline{m}_{ab}(t)\). Examples of these mappings include correlating \(\omega_{ba}\) with the electric field of the bath acting on the system coordinate.

Let’s evaluate the dipole correlation function for an arbitrary \(H_{SB}\) and an arbitrary number of system eigenstates. From Equation \ref{12.83} we have

\[C_{\mu\mu} = \sum_{abcd,\alpha} p_a p_\alpha \left\langle \alpha \left| \left\langle a \left| U_{SB}^\dagger \right| c \right\rangle U_B^\dagger \left\langle c \left| U_S^\dagger\, \overline{m}\, U_S \right| d \right\rangle U_B \left\langle d \left| U_{SB} \right| b \right\rangle \langle b | \overline{m} | a \rangle \right| \alpha \right\rangle \label{12.94}\]

\[\left\langle c \left| U_S^\dagger\, \overline{m}\, U_S \right| d \right\rangle = e^{-i(E_d - E_c)t/\hbar}\, \overline{m}_{cd} \label{12.95}\]

\[\overline{m}_{cd}(t) = U_B^\dagger\, \overline{m}_{cd}\, U_B \label{12.96}\]

\[\left\langle a \left| U_{SB}^\dagger \right| c \right\rangle = \left\langle a \left| e^{\frac{i}{\hbar}\int_0^t dt'\, \mathbf{H}_{SB}(t')} \right| c \right\rangle = \exp\left[ \frac{i}{\hbar}\int_0^t dt'\, \left[ \mathbf{H}_{SB} \right]_{ac}(t') \right] \label{12.97}\]

\[\begin{align} C_{\mu\mu} &= \sum_{abcd} p_a \left\langle e^{-i\omega_{dc}t}\, e^{\frac{i}{\hbar}\int_0^t dt'\, \left[ H_{SB} \right]_{ac}(t')}\; \overline{m}_{cd}(t)\; e^{-\frac{i}{\hbar}\int_0^t dt'\, \left[ H_{SB} \right]_{db}(t')}\; \overline{m}_{ba} \right\rangle_B \label{12.98} \\[4pt] &= \left\langle \overline{m}_{cd}(t)\, \overline{m}_{ba}(0)\, \exp\left[ -i\omega_{dc}t - \frac{i}{\hbar}\int_0^t dt'\, \left( \left[ H_{SB} \right]_{db}(t') - \left[ H_{SB} \right]_{ac}(t') \right) \right] \right\rangle_B \label{12.99} \end{align}\]
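In practice, the mixed quantum-classical recipe based on Equation \ref{12.86} amounts to generating a classical trajectory for the bath, mapping it onto \(\omega_{ba}(t)\), accumulating the phase, and averaging over trajectories. The sketch below substitutes a Gaussian-Markovian (Ornstein-Uhlenbeck) random process for an MD-derived frequency trajectory (an assumption for illustration only) and works in the Condon limit, where \(\overline{m}_{ab}(t)\) is taken constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_trajectory(n_steps, dt, Delta, tau_c):
    """Stand-in for an MD-derived frequency fluctuation delta_w(t): an
    Ornstein-Uhlenbeck process with variance Delta^2, correlation time tau_c."""
    dw = np.empty(n_steps)
    dw[0] = Delta * rng.standard_normal()
    f = np.exp(-dt / tau_c)
    s = Delta * np.sqrt(1.0 - f * f)
    for i in range(1, n_steps):
        dw[i] = f * dw[i - 1] + s * rng.standard_normal()
    return dw

n_traj, n_steps, dt = 200, 2000, 0.01
w_ba, Delta, tau_c = 50.0, 5.0, 0.5   # assumed values

C = np.zeros(n_steps, dtype=complex)
for _ in range(n_traj):
    dw = ou_trajectory(n_steps, dt, Delta, tau_c)
    phase = np.cumsum(w_ba + dw) * dt   # integral of w_ba(t') dt'
    C += np.exp(-1j * phase)            # Condon limit of Eq. (12.86)
C /= n_traj   # ensemble average over bath trajectories
```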
14.1: Fluctuations and Randomness - Some Definitions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/14%3A_Fluctuations_in_Spectroscopy/14.01%3A_Fluctuations_and_Randomness_-_Some_Definitions

“Fluctuations” is my word for the time-evolution of a randomly perturbed system at or near equilibrium. For chemical problems in the condensed phase we constantly come up against the problem of random fluctuations to dynamical variables as a result of their interactions with their environment. It is unreasonable to think that you will come up with a deterministic equation of motion for the internal variable, but we should be able to understand the behavior statistically and come up with equations of motion for probability distributions. Models of this form are commonly referred to as stochastic. A stochastic equation of motion is one which includes a random component to the time-development.

When we introduced correlation functions, we discussed the idea that a statistical description of a system is commonly formulated in terms of probability distribution functions \(P\). Observables are commonly described by moments of this distribution, which are obtained by integrating over \(P\), for instance

\[\begin{aligned} \langle x \rangle &= \int dx\, x\, \mathrm{P}(x) \\ \left\langle x^2 \right\rangle &= \int dx\, x^2\, \mathrm{P}(x) \end{aligned} \label{13.1}\]

For time-dependent processes, we recognize that it is possible that the probability distribution carries a time-dependence.

\[\begin{align} \langle x(t) \rangle &= \int dx\, x(t)\, \mathrm{P}(x,t) \\ \left\langle x^2(t) \right\rangle &= \int dx\, x^2(t)\, \mathrm{P}(x,t) \label{13.2} \end{align}\]

Correlation functions go a step further and depend on joint probability distributions \(\mathrm{P}(t'',A; t',B)\) that give the probability of observing a value of \(A\) at time \(t''\) and a value of \(B\) at time \(t'\):

\[\left\langle A(t'')B(t') \right\rangle = \int dA \int dB\, AB\, \mathrm{P}(t'',A; t',B) \label{13.3}\]

The statistical description of random fluctuations is formulated through these time-dependent probability distributions, and we need a stochastic equation of motion to describe their behavior. A common example of such a process is Brownian motion, the fluctuating position of a particle under the influence of a thermal environment.

It is not practical to describe the absolute position of the particle, but we can formulate an equation of motion for the probability of finding the particle in time and space given that you know its initial position. Working from a random walk model, one can derive an equation of motion that takes the form of the well-known diffusion equation, here written in one dimension:

\[\frac{\partial\mathrm{P}(x,t)}{\partial t} = \mathcal{D}\frac{\partial^2}{\partial x^2}\mathrm{P}(x,t) \label{13.4}\]

Here \(\mathcal{D}\) is the diffusion constant, which sets the time scale and spatial extent of the random motion. [Note the similarity of this equation to the time-dependent Schrödinger equation for a free particle if \(\mathcal{D}\) is taken as imaginary.]
Given the initial condition \(\mathrm{P}(x,t_0) = \delta(x-x_0)\), the solution is a conditional probability density

\[\mathrm{P}(x,t; x_0,t_0) = \frac{1}{\sqrt{2\pi\mathcal{D}(t-t_0)}}\exp\left( -\frac{(x-x_0)^2}{4\mathcal{D}(t-t_0)} \right) \label{13.5}\]

The probability distribution describes the statistics for fluctuations in the position of a particle averaged over many trajectories. Analyzing the moments of this probability density using Equation \ref{13.2} we find that

\[\begin{align} \langle x(t) \rangle &= x_0 \\[4pt] \left\langle \delta x(t)^2 \right\rangle &= 2\mathcal{D}t \end{align}\]

where

\[\delta x(t) = x(t) - x_0\]

So, the distribution maintains a Gaussian shape centered at \(x_0\), and broadens with time as \(2\mathcal{D}t\).

Brownian motion is an example of a Gaussian-Markovian process. Here Gaussian refers to cases in which we describe the probability distribution for a variable \(P(x)\) as a Gaussian normal distribution. Here in one dimension:

\[\begin{align} \mathrm{P}(x) &= A e^{-(x-x_0)^2/2\Delta^2} \\[4pt] \Delta^2 &= \left\langle x^2 \right\rangle - \langle x \rangle^2 \label{13.6} \end{align}\]

The Gaussian distribution is important, because the central limit theorem states that the distribution of a continuous random variable with finite variance will follow the Gaussian distribution. Gaussian distributions also are completely defined in terms of their first and second moments, meaning that a time-dependent probability density \(P(x,t)\) is uniquely characterized by a mean value in the observable variable \(x\) and a correlation function that describes the fluctuations in \(x\). Gaussian distributions for systems at thermal equilibrium are also important because of the relationship between Gaussian distributions and parabolic free energy surfaces:

\[G(x) = -k_BT\ln\mathrm{P}(x) \label{13.7}\]

If the probability density is Gaussian along \(x\), then the system’s free energy projected onto this coordinate (often referred to as a potential of mean force) has a harmonic shape. Thus Gaussian statistics are effective for describing fluctuations about an equilibrium mean value \(x_0\).

Markovian means that the time-dependent behavior of a system does not depend on its earlier history, statistically speaking. Naturally the state of any one molecule depends on its trajectory through phase space; however, we are saying that from the perspective of an ensemble there is no memory of the state of the system at an earlier time. This can be stated in terms of joint probability functions as

\[\mathrm{P}(x_2,t_2;\, x_1,t_1;\, x_0,t_0) = \mathrm{P}(x_2,t_2;\, x_1,t_1)\,\mathrm{P}(x_1,t_1;\, x_0,t_0)\]

or

\[\mathrm{P}(t_2;\, t_1;\, t_0) = \mathrm{P}(t_2;\, t_1)\,\mathrm{P}(t_1;\, t_0)\]

The probability of observing a trajectory that takes you from state 1 at time 1 to state 2 at time 2 does not depend on where you were at time 0. Further, given the knowledge of the probability of executing changes during a single time interval, you can exactly describe \(P\) for any time interval.
Markovian therefore refers to time-dependent processes on a time scale long compared to the correlation time for the internal variable that you care about. For instance, the diffusion equation only holds after the particle has experienced sufficient collisions with its surroundings that it has no memory of its earlier position and momentum: \(t > \tau_c\).
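As a numerical check on the diffusion statistics above (Python; the diffusion constant and step parameters are assumed values), a minimal random walk sketch reproduces the moments of Equation \ref{13.5}: the mean stays at \(x_0\) while the mean squared displacement grows as \(2\mathcal{D}t\).

```python
import numpy as np

rng = np.random.default_rng(1)

D = 0.5                              # diffusion constant (assumed value)
n_walkers, n_steps, dt = 5000, 1000, 0.01

# Gaussian steps of variance 2*D*dt reproduce the diffusion equation (13.4)
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)         # trajectories starting from x0 = 0

t = dt * np.arange(1, n_steps + 1)
msd = (x**2).mean(axis=0)            # tracks 2*D*t; <x(t)> stays at x0
print(msd[-1], 2 * D * t[-1])        # both ~10
```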
14.2: Line-Broadening and Spectral Diffusion
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/14%3A_Fluctuations_in_Spectroscopy/14.02%3A_Line-Broadening_and_Spectral_Diffusion

We will investigate how a fluctuating environment influences measurements of an experimentally observed internal variable. Specifically we focus on the spectroscopy of a chromophore, and how the chromophore’s interactions with its environment influence its transition frequency and absorption lineshape. In the absence of interactions, the resonance frequency that we observe is \(\omega_{eg}\). However, we have seen that interactions of this chromophore with its environment can shift this frequency. In condensed matter, time-dependent interactions with the surroundings can lead to time-dependent frequency shifts, known as spectral diffusion. How these dynamics influence the line width and lineshape of absorption features depends on the distribution of frequencies available to your system (\(\Delta\)) and the time scale of sampling varying environments (\(\tau_c\)). Consider the following limiting cases of line broadening: the homogeneous limit, in which each molecule rapidly samples the full distribution of frequencies (\(\Delta\tau_c \ll 1\)); and the inhomogeneous limit, in which each molecule’s frequency is effectively static on the observation time scale (\(\Delta\tau_c \gg 1\)), so that the lineshape reports on the static distribution \(\Delta\). The intermediate regime, in which frequencies evolve on comparable time scales, is treated in the next section.
14.3: Gaussian-Stochastic Model for Spectral Diffusion
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/14%3A_Fluctuations_in_Spectroscopy/14.03%3A_Gaussian-Stochastic_Model_for_Spectral_Diffusion

We will begin with a classical description of how random fluctuations in frequency influence the absorption lineshape, by calculating the dipole correlation function for the resonant transition. This is a Gaussian stochastic model for fluctuations, meaning that we will describe the time-dependence of the transition energy as random fluctuations about an average value through a Gaussian distribution.

\[\begin{align} \omega(t) &= \langle\omega\rangle + \delta\omega(t) \label{13.9} \\[4pt] \langle\delta\omega(t)\rangle &= 0 \label{13.10} \end{align}\]

The fluctuations in \(\omega\) allow the system to explore a Gaussian distribution of transition frequencies characterized by a variance:

\[\Delta = \sqrt{\left\langle \omega^2 \right\rangle - \langle\omega\rangle^2} = \sqrt{\left\langle \delta\omega^2 \right\rangle} \label{13.11}\]

In many figures the width of the Gaussian distribution is labeled with the standard deviation (here \(\Delta\)). This is meant to symbolize that \(\Delta\) is the parameter that determines the width, and not that it is the line width. For Gaussian distributions, the full line width at half maximum amplitude (FWHM) is \(2.35\Delta\).

The time scales for the frequency shifts will be described in terms of a frequency correlation function

\[C_{\delta\omega\delta\omega}(t) = \langle \delta\omega(t)\delta\omega(0) \rangle \label{13.12}\]

Furthermore, we will describe the time scale of the random fluctuations through a correlation time \(\tau_c\).

The absorption lineshape is described with a dipole time-correlation function. Let’s treat the dipole moment as an internal variable to the system, whose value depends on that of \(\omega\). Qualitatively, it is possible to write an equation of motion for \(\mu\) by associating the dipole moment with the displacement of a bound particle (\(x\)) times its charge, and using our intuition regarding how the system behaves. For an unperturbed state, we expect that \(x\) will oscillate at a frequency \(\omega\), but with perturbations, it will vary through the distribution of available frequencies. One function that has this behavior is

\[x(t) = x_0 e^{-i\omega(t)t} \label{13.13}\]

If we differentiate this equation with respect to time and multiply by charge we have

\[\frac{\partial\mu}{\partial t} = -i\omega(t)\,\mu(t) \label{13.14}\]

Although it is a classical equation, note the similarity to the quantum Heisenberg equation for the dipole operator:

\[\partial\mu/\partial t = iH(t)\mu/\hbar + h.c.\]

The correspondence of \(\omega(t)\) with \(H(t)/\hbar\) offers some insight into how the quantum version of this problem will look.

The solution to Equation \ref{13.14} is

\[\mu(t) = \mu(0)\exp\left[ -i\int_0^t d\tau\, \omega(\tau) \right] \label{13.15}\]

Substituting this expression and Equation \ref{13.9} into the dipole correlation function gives

\[C_{\mu\mu}(t) = |\mu|^2 e^{-i\langle\omega\rangle t} F(t) \label{13.16}\]

where

\[F(t) = \left\langle \exp\left[ -i\int_0^t d\tau\, \delta\omega(\tau) \right] \right\rangle \label{13.17}\]

The dephasing function (\(F(t)\)) is obtained by performing an equilibrium average of the exponential argument over fluctuating trajectories.
For ergodic systems, this ensemble average is equivalent to averaging long enough over a single trajectory. The dephasing function is a bit complicated to work with as written. However, for the case of Gaussian statistics for the fluctuations, it is possible to simplify \(F(t)\) by expanding it as a cumulant expansion of averages (see appendix below for details).\[F (t) = \exp \left[ - i \int _ {0}^{t} d \tau^{\prime} \, \left\langle \delta \omega \left( \tau^{\prime} \right) \right\rangle + \frac {i^{2}} {2 !} \int _ {0}^{t} d \tau^{\prime} \, \int _ {0}^{t} d \tau^{\prime \prime} \, \left\{\left\langle \delta \omega \left( \tau^{\prime} \right) \delta \omega \left( \tau^{\prime \prime} \right) \right\rangle - \left\langle \delta \omega \left( \tau^{\prime} \right) \right\rangle \left\langle \delta \omega \left( \tau^{\prime \prime} \right) \right\rangle \right\} \right] \label{13.18}\]In this expression, the first term is zero, since \(\langle \delta \omega \rangle = 0\). Only the second term survives for a system with Gaussian statistics. Now recognizing that we have a stationary system, we have\[F (t) = \exp \left[ - \frac {1} {2} \int _ {0}^{t} d \tau^{\prime} \, \int _ {0}^{t} d \tau^{\prime \prime} \, \left\langle \delta \omega \left( \tau^{\prime} - \tau^{\prime \prime} \right) \delta \omega ( 0 ) \right\rangle \right] \label{13.19}\]We have rewritten the dephasing function in terms of a correlation function that describes the fluctuating energy gap. Note that this is a classical expression, so there is no time-ordering to the exponential. \(F(t)\) can be rewritten through a change of variables (\(\tau = \tau^{\prime} - \tau^{\prime \prime}\)):\[F (t) = \exp \left[ - \int _ {0}^{t} d \tau ( t - \tau ) \langle \delta \omega ( \tau ) \delta \omega ( 0 ) \rangle \right] \label{13.20}\]So the Gaussian stochastic model allows the influence of the frequency fluctuations on the lineshape to be described by \(C _ {\delta \omega \delta \omega} (t)\), a frequency correlation function that follows Gaussian statistics. Note, we are now dealing with two different correlation functions, \(C _ {\delta \omega \delta \omega}\) and \(C _ {\mu \mu}\). The frequency correlation function encodes the dynamics that result from molecules interacting with the surroundings, whereas the dipole correlation function describes how the system interacts with a light field and thereby the absorption spectrum. Now, we will calculate the lineshape assuming that \(C _ {\delta \omega \delta \omega}\) decays with a correlation time \(\tau_c\) and takes on an exponential form\[C _ {\delta \omega \delta \omega} (t) = \Delta^{2} \exp \left[ - t / \tau _ {c} \right] \label{13.21}\]Then Equation \ref{13.20} gives\[F (t) = \exp \left[ - \Delta^{2} \tau _ {c}^{2} \left( \exp \left( - t / \tau _ {c} \right) + t / \tau _ {c} - 1 \right) \right] \label{13.22}\]which is in the form we have seen earlier, \(F (t) = \exp ( - g (t) )\), with\[g (t) = \Delta^{2} \tau _ {c}^{2} \left( \exp \left( - t / \tau _ {c} \right) + t / \tau _ {c} - 1 \right) \label{13.23}\]To interpret this lineshape function, let’s look at its limiting forms. Long correlation times (\( t \ll \tau_c\)): This corresponds to the inhomogeneous case where \(C _ {\delta \omega \delta \omega} (t) = \Delta^{2}\), a constant.
For \(t \ll \tau _ {c}\), we can perform a short time expansion of the exponential\[e^{- t / \tau _ {c}} \approx 1 - \frac {t} {\tau _ {c}} + \frac {t^{2}} {2 \tau _ {c}^{2}} + \ldots \label{13.24}\]and from Equation \ref{13.23} we obtain\[g (t) = \Delta^{2} t^{2} / 2 \label{13.25}\]At short times, the dipole correlation function will have a Gaussian decay with a rate given by \(\Delta^{2}\):\[F (t) = \exp \left( - \Delta^{2} t^{2} / 2 \right)\]This has the proper behavior for a classical correlation function, i.e., even in time:\[C _ {\mu \mu} (t) = C _ {\mu \mu} ( - t )\]In this limit, the absorption lineshape is:\[\begin{align} \sigma ( \omega ) & = | \mu |^{2} \int _ {- \infty}^{+ \infty} d t \, e^{i \omega t} e^{- i \langle \omega \rangle t - g (t)} \\ & = | \mu |^{2} \int _ {- \infty}^{+ \infty} d t \, e^{i ( \omega - \langle \omega \rangle ) t} e^{- \Delta^{2} t^{2} / 2} \\ & = \sqrt {\frac {2 \pi} {\Delta^{2}}} | \mu |^{2} \exp \left( - \frac {( \omega - \langle \omega \rangle )^{2}} {2 \Delta^{2}} \right) \label{13.26} \end{align}\]We obtain a Gaussian inhomogeneous lineshape centered at the mean frequency with a width dictated by the frequency distribution. Short correlation times (\( t \gg \tau_c\)): This corresponds to the homogeneous limit, in which you can approximate\[C _ {\delta \omega \delta \omega} (t) = \Delta^{2} \tau _ {c} \, \delta (t)\]For \(t \gg \tau _ {c}\) we set \(e^{- t / \tau _ {c}} \approx 0\) and \(t / \tau _ {c} \gg 1\), and Equation \ref{13.23} gives\[g (t) = \Delta^{2} \tau _ {c} \, t \label{13.27}\]If we define the constant\[\Delta^{2} \tau _ {c} \equiv \Gamma \label{13.28}\]we see that the dephasing function has an exponential decay:\[F (t) = \exp [ - \Gamma t ] \label{13.29}\]The lineshape for short correlation times (or fast fluctuations) takes on a Lorentzian shape\[ \begin{array} {c} {\sigma ( \omega ) = | \mu |^{2} \int _ {- \infty}^{+ \infty} d t \, e^{i ( \omega - \langle \omega \rangle ) t} e^{- \Gamma | t |}} \\ {\operatorname {Re} \sigma ( \omega ) = | \mu |^{2} \frac {\Gamma} {( \omega - \langle \omega \rangle )^{2} + \Gamma^{2}}} \end{array} \label{13.30}\]This represents the homogeneous limit. Even with a broad distribution of accessible frequencies, if the system explores all of these frequencies on a time scale fast compared to the inverse of the distribution (\(\Delta \tau _ {\mathrm {c}} \ll 1\)), then the resonance will be “motionally narrowed” into a Lorentzian line. More generally, the envelope of the dipole correlation function will look Gaussian at short times and exponential at long times. The correlation time is the separation between these regimes. The behavior for varying time scales of the dynamics (\(\tau_c\)) is best characterized with respect to the distribution of accessible frequencies (\(\Delta\)). So we can define a factor\[\kappa = \Delta \cdot \tau _ {c} \label{13.31}\]\(\kappa \ll 1\) is the fast modulation limit and \(\kappa \gg 1\) is the slow modulation limit.
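The crossover between these limits is easily explored numerically. The sketch below (illustrative values only, with \(\langle\omega\rangle = 0\)) evaluates \(F(t) = \exp[-g(t)]\) from Equation \ref{13.23} and Fourier transforms it to the lineshape for several values of \(\kappa\), comparing the computed width with the two limiting forms above.

```python
import numpy as np

# Sketch: lineshape of the Gaussian-stochastic model for several values of
# kappa = Delta * tau_c, using g(t) from Eq. (13.23). Illustrative only.

Delta = 1.0
t = np.linspace(0.0, 200.0, 2**14)
dt = t[1] - t[0]
w = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt)) * 2.0 * np.pi

for kappa in (0.1, 1.0, 10.0):
    tau_c = kappa / Delta
    g = Delta**2 * tau_c**2 * (np.exp(-t / tau_c) + t / tau_c - 1.0)
    F = np.exp(-g)
    # F(t) is real here, so the real part of the transform is symmetric in w
    sigma = 2.0 * np.real(np.fft.fftshift(np.fft.fft(F))) * dt
    above = w[sigma > 0.5 * sigma.max()]
    print(f"kappa = {kappa:5.1f}: FWHM = {above.max() - above.min():.3f}  "
          f"(Gaussian limit 2.35*Delta = {2.355 * Delta:.2f}, "
          f"Lorentzian limit 2*Delta*kappa = {2 * Delta * kappa:.2f})")
```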
Examining how \(C _ {\delta \omega \delta \omega} (t)\), \(F (t)\), and \(\sigma _ {a b s} ( \omega )\) change as a function of \(\kappa\), we see that for a fixed distribution of frequencies \(\Delta\) the effect of speeding up the fluctuations through this distribution (decreasing \(\tau_c\)) is to gradually narrow the observed lineshape from a Gaussian distribution of static frequencies with width (FWHM) of \(2.35\Delta\) to a motionally narrowed Lorentzian lineshape with width (FWHM) of \(2\Delta^{2} \tau _ {c} = 2\Delta \cdot \kappa\). This is analogous to the motional narrowing effect first described in the case of temperature-dependent NMR spectra of two exchanging species. Assume we have two resonances at \(\omega_A\) and \(\omega_B\) associated with two chemical species that are exchanging at a rate \(k_{AB}\)\[\ce{A <=> B}\]If the rate of exchange is slow relative to the frequency splitting, \(k _ {A B} \ll \omega _ {A} - \omega _ {B}\), then we expect two resonances, each with a linewidth dictated by the molecular relaxation processes (\(T_2\)) and the transfer rate of each species. On the other hand, when the rate of exchange between the two species becomes faster than the energy splitting, then the two resonances narrow together to form one resonance at the mean frequency.
Appendix: The Cumulant Expansion
For a statistical description of the random variable \(x\), we wish to characterize the moments of \(x\): \(\langle x \rangle\), \(\langle x^2 \rangle\), .... Then the average of an exponential of \(x\) can be expressed as an expansion in moments\[\underbrace{\left\langle e^{i k x} \right\rangle = \sum _ {n = 0}^{\infty} \frac {( i k )^n} {n !} \left\langle x^n \right\rangle}_{\text{expansion in moments}} \label{13.31A}\]An alternate way of expressing this expansion is in terms of cumulants\[\underbrace{\left\langle e^{i k x} \right\rangle = \exp \left( \sum _ {n = 1}^{\infty} \frac {( i k )^{n}} {n !} c _ {n} (x) \right)}_{\text{expansion in cumulants}} \label{13.32}\]where the first few cumulants are:\[ \begin{align} c _ {1} (x) &= \langle x \rangle \tag{mean} \label{13.33} \\[4pt] c _ {2} (x) &= \left\langle x^{2} \right\rangle - \langle x \rangle^{2} \label{13.34} \tag{variance} \\[4pt] c _ {3} (x) &= \left\langle x^{3} \right\rangle - 3 \langle x \rangle \left\langle x^{2} \right\rangle + 2 \langle x \rangle^{3} \tag{skewness} \label{13.35} \end{align}\]An expansion in cumulants converges much more rapidly than an expansion in moments, particularly when you consider that \(x\) may be a time-dependent variable. Particularly useful is the observation that all cumulants with \(n > 2\) vanish for a system that obeys Gaussian statistics. We obtain the cumulants above by expanding Equations \ref{13.31A} and \ref{13.32}, and comparing terms in powers of \(k\).
Instead of expanding the exponential directly, we expand the exponential argument in powers of an operator or variable \(H\)\[F = \exp [ c ] = 1 + c + \frac {1} {2} c^{2} + \cdots \label{13.36}\]\[c = c _ {1} H + \frac {1} {2} c _ {2} H^{2} + \cdots \label{13.37}\]Inserting Equation \ref{13.37} into Equation \ref{13.36} and collecting terms in orders of \(H\) gives\[\begin{aligned} F & = 1 + \left( c _ {1} H + \frac {1} {2} c _ {2} H^{2} + \cdots \right) + \frac {1} {2} \left( c _ {1} H + \frac {1} {2} c _ {2} H^{2} + \cdots \right)^{2} + \cdots \\ & = 1 + \left( c _ {1} \right) H + \frac {1} {2} \left( c _ {2} + c _ {1}^{2} \right) H^{2} + \cdots \end{aligned} \label{13.38}\]Now comparing this with the direct expansion of \(F\) in powers of \(H\),\[F = 1 + f _ {1} H + \frac {1} {2} f _ {2} H^{2} + \cdots \label{13.39}\]allows one to see that\[\begin{array} {l} {c _ {1} = f _ {1}} \\ {c _ {2} = f _ {2} - f _ {1}^{2}} \end{array} \label{13.40}\]The cumulant expansion can also be applied to time-correlations. Applying this to the time-ordered exponential operator we obtain:\[\begin{align} F (t) & = \left\langle \exp _ {+} \left[ - i \int _ {0}^{t} d \tau \, \omega ( \tau ) \right] \right\rangle \\ & \approx \exp \left[ c _ {1} (t) + c _ {2} (t) \right] \label{13.42} \end{align} \]\[\begin{aligned} c _ {1} & = - i \int _ {0}^{t} d \tau \langle \omega ( \tau ) \rangle \\ c _ {2} & = - \int _ {0}^{t} d \tau _ {2} \int _ {0}^{\tau _ {2}} d \tau _ {1} \left\{\left\langle \omega \left( \tau _ {2} \right) \omega \left( \tau _ {1} \right) \right\rangle - \left\langle \omega \left( \tau _ {2} \right) \right\rangle \left\langle \omega \left( \tau _ {1} \right) \right\rangle \right\} \\ & = - \int _ {0}^{t} d \tau _ {2} \int _ {0}^{\tau _ {2}} d \tau _ {1} \left\langle \delta \omega \left( \tau _ {2} \right) \delta \omega \left( \tau _ {1} \right) \right\rangle \end{aligned} \label{13.43}\]For Gaussian statistics, all higher cumulants vanish.This page titled 14.3: Gaussian-Stochastic Model for Spectral Diffusion is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
14.4: The Energy Gap Hamiltonian
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/14%3A_Fluctuations_in_Spectroscopy/14.04%3A_The_Energy_Gap_Hamiltonian | In describing fluctuations in a quantum mechanical system, we describe how an experimental observable is influenced by its interactions with a thermally agitated environment. For this, we work with the specific example of an electronic absorption spectrum and return to the Displaced Harmonic Oscillator (DHO) model. We previously described this model in terms of the eigenstates of the material Hamiltonian \(H_0\), and interpreted the dipole correlation function and resulting lineshape in terms of the overlap between two wave packets evolving on the ground and excited surfaces \(| E \rangle\) and \(| G \rangle\).\[C _ {\mu \mu} (t) = \left| \mu _ {e g} \right|^{2} e^{- i \left( E _ {e} - E _ {g} \right) t / \hbar} \left\langle \varphi _ {g} (t) | \varphi _ {e} (t) \right\rangle \label{13.43}\]It is worth noting a similarity between the DHO Hamiltonian, and a general form for the interaction of an electronic two-level “system” with a harmonic oscillator “bath” whose degrees of freedom are dark to the observation, but which influence the behavior of the system.Expressed in a slightly different physical picture, we can also conceive of this process as nuclear motions that act to modulate the electronic energy gap \(\omega _ {e g}\). We can imagine rewriting the same Hamiltonian in a form with a new physical picture that describes the electronic energy gap’s dependence on \(q\), i.e., its variation relative to \(\omega _ {e g}\). If we define an Energy Gap Hamiltonian:\[H _ {e g} = H _ {e} - H _ {g}\]we can rewrite the DHO Hamiltonian\[H _ {0} = | e \rangle E _ {e} \langle e | + | g \rangle E _ {g} \langle g | + H _ {e} + H _ {g} \label{13.44}\]as an electronic transition linearly coupled to a harmonic oscillator:\[H _ {0} = | e \rangle E _ {e} \langle e | + | g \rangle E _ {g} \langle g | + H _ {e g} + 2 H _ {g} \label{13.44B}\]Noting that\[H _ {g} = \frac {p^{2}} {2 m} + \frac {1} {2} m \omega _ {0}^{2} q^{2} \label{13.44C}\]we can write this as a system-bath Hamiltonian:\[H _ {0} = H _ {S} + H _ {B} + H _ {S B} \label{13.44D}\]where \(H_{SB}\) describes the interaction of the electronic system (\(H_S\)) with the vibrational bath (\(H_B\)). Here\[H _ {S} = | e \rangle E _ {e} \langle e | + | g \rangle E _ {g} \langle g |\]\[H _ {B} = 2 H _ {g}\]and\[\begin{align} H _ {S B} = H _ {e g} &= \dfrac {1} {2} m \omega _ {0}^{2} ( q - d )^{2} - \frac {1} {2} m \omega _ {0}^{2} q^{2} \\ &= - m \omega _ {0}^{2} d q + \frac {1} {2} m \omega _ {0}^{2} d^{2} \\ &= - c q + \lambda \end{align}\]The Energy Gap Hamiltonian describes a linear coupling between the electronic transition and a harmonic oscillator. The strength of the coupling is \(c = m \omega _ {0}^{2} d\), and the Hamiltonian has a constant energy offset given by the reorganization energy \(\lambda = \tfrac{1}{2} m \omega _ {0}^{2} d^{2}\).
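This partitioning is simple to verify symbolically. The snippet below is an illustrative check (not from the original text) that the difference of the two displaced harmonic potentials is linear in \(q\) with slope \(-m\omega_0^2 d\) and offset \(\lambda = \tfrac{1}{2} m\omega_0^2 d^2\); the kinetic energies cancel in the difference, so only the potential terms are needed.

```python
import sympy as sp

# Symbolic check of the Energy Gap Hamiltonian partitioning:
# H_eg = H_e - H_g for two harmonic surfaces displaced by d.
m, w0, d, q = sp.symbols('m omega_0 d q', positive=True)

H_g = sp.Rational(1, 2) * m * w0**2 * q**2         # ground surface (potential part)
H_e = sp.Rational(1, 2) * m * w0**2 * (q - d)**2   # excited surface, displaced by d

H_eg = sp.expand(H_e - H_g)
print(H_eg)                # -> -d*m*omega_0**2*q + d**2*m*omega_0**2/2

c = -sp.diff(H_eg, q)      # coupling strength: m*omega_0**2*d
lam = H_eg.subs(q, 0)      # reorganization energy: m*omega_0**2*d**2/2
print(c, lam)
```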
Any motion in the bath coordinate \(q\) introduces a proportional change in the electronic energy gap. In an alternate form, the Energy Gap Hamiltonian can also be written to incorporate the reorganization energy into the system:\[\begin{align*} H _ {0} &= | e \rangle E _ {e} \langle e | + | g \rangle E _ {g} \langle g | + H _ {e g} + 2 H _ {g} \label{13.44E} \\[4pt] H _ {S}^{\prime} &= | e \rangle \left( E _ {e} + \lambda \right) \langle e | + | g \rangle E _ {g} \langle g | \\[4pt] H _ {B}^{\prime} &= \frac {p^{2}} {2 m} + \frac {1} {2} m \omega _ {0}^{2} q^{2} \\[4pt] H _ {S B}^{\prime} &= - m \omega _ {0}^{2} d q \end{align*}\]This formulation describes fluctuations about the average value of the energy gap, \(\hbar \omega _ {e g} + \lambda\); however, the observables calculated are the same.From the picture of a modulated energy gap one can begin to see how random fluctuations can be treated by coupling to a harmonic bath. If each oscillator modulates the energy gap at a given frequency, and the phase between oscillators is random as a result of their independence, then time-domain fluctuations and dephasing can be cast in terms of a Fourier spectrum of couplings to oscillators with continuously varying frequency.Now let’s work through the description of electronic spectroscopy with the Energy Gap Hamiltonian more carefully. Working from Equations \ref{13.43} and \ref{13.44} we express the energy gap Hamiltonian through reduced coordinates for the momentum, coordinate, and displacement of the oscillator. \[p = \hat {p} \left( 2 \hbar \omega _ {0} m \right)^{- 1 / 2}\]\[q = \hat {q} \left( m \omega _ {0} / 2 \hbar \right)^{1 / 2}\]\[d = d \left( m \omega _ {0} / 2 \hbar \right)^{1 / 2}\]with\[ \begin{align} H _ {e} &= \hbar \omega _ {0} \left( p^{2} + ( q - d )^{2} \right) \\[4pt] H_{g} &= \hbar \omega _ {0} \left( p^{2} + q^{2} \right) \end{align} \label{13.48}\]From Equation \ref{13.48} we have\[\left.\begin{aligned} H _ {e g} & = - 2 \hbar \omega _ {0} d q + \hbar \omega _ {0} d^{2} \\ & = - m \omega _ {0}^{2} d q + \lambda \end{aligned} \right. \label{13.49}\]The energy gap Hamiltonian describes a linear coupling of the electronic system to the coordinate \(q\). The slope of \(H_{eg}\) versus \(q\) is the coupling strength, and the average value of \(H_{eg}\) in the ground state, \(H _ {e g} ( q = 0 )\), is offset by the reorganization energy \(\lambda\). We note that the average value of the energy gap Hamiltonian is \(\left\langle H _ {e g} \right\rangle = \lambda\).To obtain the absorption lineshape from the dipole correlation function\[C _ {\mu \mu} (t) = \left| \mu _ {e g} \right|^{2} e^{- i \omega _ {e g} t} F (t) \label{13.50}\]we must evaluate the dephasing function\[F (t) = \left\langle e^{i H _ {g} t / \hbar} e^{- i H _ {e} t / \hbar} \right\rangle = \left\langle U _ {g}^{\dagger} U _ {e} \right\rangle \label{13.51}\]We want to rewrite the dephasing function in terms of the time dependence of the energy gap \(H_{eg}\); that is, if \(F (t) = \left\langle U _ {e g} \right\rangle\), then what is \(U _ {e g}\)? This involves a unitary transformation of the dynamics to a new frame of reference.
The transformation from the DHO Hamiltonian to the EG Hamiltonian is similar to our derivation of the interaction picture.Transformation of time-propagators: If we have a time dependent quantity of the form\[e^{i H _ {A} t} A e^{- i H _ {B} t} \label{13.52}\]we can also express the dynamics through the difference Hamiltonian \(H _ {B A} = H _ {B} - H _ {A}\)\[A e^{- i \left( H _ {B} - H _ {A} \right) t} = A e^{- i H _ {B A} t} \label{13.53}\]using a commonly performed unitary transformation. If we write\[H _ {B} = H _ {A} + H _ {B A} \label{13.54}\]we can use the same procedure for partitioning the dynamics in the interaction picture to write\[e^{- i H _ {B} t} = e^{- i H _ {A} t} \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {0}^{t} d \tau H _ {B A} ( \tau ) \right] \label{13.55}\]where\[H _ {B A} ( \tau ) = e^{i H _ {A} t} H _ {B A} e^{- i H _ {A} t} \label{13.56}\]Then, we can also write:\[e^{i H _ {A} t} e^{- i H _ {B} t} = \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {0}^{t} d \tau H _ {B A} ( \tau ) \right] \label{13.57}\]Noting the mapping to the interaction picture\[H _ {e} = H _ {g} + H _ {e g} \quad \Leftrightarrow \quad H = H _ {0} + V \label{13.58}\]we see that we can represent the time dependence of the electronic energy gap \(H_{eg}\) using\[e^{- i H _ {e} t / \hbar} = e^{- i H _ {g} t / \hbar} \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {0}^{t} d \tau H _ {e g} ( \tau ) \right] \label{13.59}\]\[U _ {e} = U _ {g} U _ {e g}\]where\[\begin{align} H _ {e g} (t) & = e^{i H _ {g} t / \hbar} H _ {e g} e^{- i H _ {g} t / \hbar} \\ & = U _ {g}^{\dagger} H _ {e g} U _ {g} \label{13.60} \end{align} \]Remembering the equivalence between the harmonic mode \(H_g\) and the bath mode(s) \(H_B\) indicates that the time dependence of the EG Hamiltonian reflects how the electronic energy gap is modulated as a result of the interactions with the bath. That is, \(U _ {g} \Leftrightarrow U _ {B}\).Equation \ref{13.59} immediately implies that\[F (t) = \left\langle e^{i H _ {g} t / \hbar} e^{- i H _ {e} t / \hbar} \right\rangle = \left\langle \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {0}^{t} d \tau H _ {e g} ( \tau ) \right] \right\rangle \label{13.61}\]Now the quantum dephasing function is in the same form as we saw in our earlier classical derivation. Using the second-order cumulant expansion allows the dephasing function to be written as\[F (t) \approx \exp \left[ \frac {- i} {\hbar} \int _ {0}^{t} d \tau \left\langle H _ {e g} ( \tau ) \right\rangle + \left( \frac {- i} {\hbar} \right)^{2} \int _ {0}^{t} d \tau _ {2} \int _ {0}^{\tau _ {2}} d \tau _ {1} \left\{ \left\langle H _ {e g} \left( \tau _ {2} \right) H _ {e g} \left( \tau _ {1} \right) \right\rangle - \left\langle H _ {e g} \left( \tau _ {2} \right) \right\rangle \left\langle H _ {e g} \left( \tau _ {1} \right) \right\rangle \right\} \right] \label{13.62}\]Note that the cumulant expansion is here written as a time-ordered expansion. The first exponential term depends on the mean value of \(H_{eg}\)\[\left\langle H _ {e g} \right\rangle = \hbar \omega _ {0} d^{2} = \lambda \label{13.63}\]This is a result of how we defined \(H_{eg}\). Alternatively, the EG Hamiltonian could have been defined relative to the energy gap at \(q=0\): \(H _ {e g} = H _ {e} - H _ {g} - \lambda\). In this case the leading term in Equation \ref{13.62} would be zero, and the mean energy gap that describes the high frequency (system) oscillation in the dipole correlation function is \(\omega _ {e g} + \lambda\).The second exponential term in Equation \ref{13.62} is a correlation function that describes the time dependence of the energy gap\[\left.
\begin{array} {c} {\left\langle H _ {e g} \left( \tau _ {2} \right) H _ {e g} \left( \tau _ {1} \right) \right\rangle - \left\langle H _ {e g} \left( \tau _ {2} \right) \right\rangle \left\langle H _ {e g} \left( \tau _ {1} \right) \right\rangle} \\ {= \left\langle \delta H _ {e g} \left( \tau _ {2} \right) \delta H _ {e g} \left( \tau _ {1} \right) \right\rangle} \end{array} \right. \label{13.64}\]where\[\left.\begin{aligned} \delta H _ {e g} & = H _ {e g} - \left\langle H _ {e g} \right\rangle \\ & = - m \omega _ {0}^{2} d q \end{aligned} \right. \label{13.65}\]Defining the time-dependent energy gap transition frequency in terms of the EG Hamiltonian as\[\delta \hat {\omega} _ {e g} \equiv \frac {\delta H _ {e g}} {\hbar} \label{13.66}\]we can write the energy gap correlation function\[C _ {e g} \left( \tau _ {2} , \tau _ {1} \right) = \left\langle \delta \hat {\omega} _ {e g} \left( \tau _ {2} - \tau _ {1} \right) \delta \hat {\omega} _ {e g} ( 0 ) \right\rangle \label{13.68}\]It follows that\[F (t) = e^{- i \lambda t / \hbar} e^{- g (t)}\]and\[g (t) = \int _ {0}^{t} d \tau _ {2} \int _ {0}^{\tau _ {2}} d \tau _ {1} C _ {e g} \left( \tau _ {2} , \tau _ {1} \right) \label{13.69}\]and the dipole correlation function can be expressed as\[C _ {\mu \mu} (t) = \left| \mu _ {e g} \right|^{2} e^{- i \left( E _ {e} - E _ {g} + \lambda \right) t / \hbar} e^{- g (t)} \label{13.70}\]This is the correlation function expression that determines the absorption lineshape for a time-dependent energy gap. It is a general expression at this point, for all forms of the energy gap correlation function. The only approximation made for the bath is the second cumulant expansion.Now, let’s look specifically at the case where the bath we are coupled to is a single harmonic mode. The energy gap correlation function is evaluated from\[\left.\begin{aligned} C _ {e g} (t) & = \sum _ {n} p _ {n} \left\langle n \left| \delta \hat {\omega} _ {e g} (t) \delta \hat {\omega} _ {e g} ( 0 ) \right| n \right\rangle \\ & = \frac {1} {\hbar^{2}} \sum _ {n} p _ {n} \left\langle n \left| e^{i H _ {g} t / \hbar} \delta H _ {e g} e^{- i H _ {g} t / \hbar} \delta H _ {e g} \right| n \right\rangle \end{aligned} \right. \label{13.71}\]Noting the bath oscillator correlation function\[C _ {q q} (t) = \langle q (t) q ( 0 ) \rangle = \frac {\hbar} {2 m \omega _ {0}} \left[ ( \overline {n} + 1 ) e^{- i \omega _ {0} t} + \overline {n} e^{i \omega _ {0} t} \right] \label{13.72}\]we find\[C _ {e g} (t) = \omega _ {0}^{2} D \left[ ( \overline {n} + 1 ) e^{- i \omega _ {0} t} + \overline {n} e^{i \omega _ {0} t} \right] \label{13.73}\]Here \(\beta = 1 / k _ {B} T\) and \(\overline {n}\) is the thermally averaged occupation number for the oscillator\[\overline {n} = \sum _ {n} p _ {n} \left\langle n \left| a^{\dagger} a \right| n \right\rangle = \left( e^{\beta \hbar \omega _ {0}} - 1 \right)^{- 1} \label{13.74}\]Note that the energy gap correlation function is a complex function.
We can separate the real and imaginary parts of \(C_{eg}\) as\[C _ {e g} (t) = C _ {e g}^{\prime} + i C _ {e g}^{\prime \prime} \label{13.75}\]with\[\begin{align} C _ {e g}^{\prime} (t) &= \omega _ {0}^{2} D \operatorname {coth} \left( \beta \hbar \omega _ {0} / 2 \right) \cos \left( \omega _ {0} t \right) \\[4pt] C _ {e g}^{\prime \prime} (t) &= \omega _ {0}^{2} D \sin \left( \omega _ {0} t \right) \end{align} \label{13.76}\]where we have made use of the relation\[2 \overline {n} ( \omega ) + 1 = \operatorname {coth} ( \beta \hbar \omega / 2 ) \label{13.77}\]and\[\operatorname{coth}(x)=\left(e^{x}+e^{-x}\right) /\left(e^{x}-e^{-x}\right)\]We see that the imaginary part of the energy gap correlation function is temperature independent. The real part has the same amplitude at \(T=0\), and rises with temperature. We can analyze the high and low temperature limits of this expression from\[\begin{align} \lim_{x \rightarrow \infty} \operatorname {coth} (x) = 1 \\[4pt] \lim_{x \rightarrow 0} \operatorname {coth} (x) \approx \frac {1} {x} \end{align} \label{13.78}\]Looking at the low temperature limit, \(\operatorname{coth}\left(\beta \hbar \omega_{0} / 2\right) \rightarrow 1\) and \(\overline {n} \rightarrow 0\), we see that Equation \ref{13.82} reduces to Equation \ref{13.84}.In the high temperature limit \(k _ {B} T \gg \hbar \omega _ {0}\), \(\operatorname {coth} \left( \hbar \omega _ {0} / 2 k _ {B} T \right) \rightarrow 2 k _ {B} T / \hbar \omega _ {0}\) and we recover the expected classical result. The magnitude of the real component dominates the imaginary part, \(\left| C _ {e g}^{\prime} \right| \gg \left| C _ {e g}^{\prime \prime} \right|\), and the energy gap correlation function \(C_{eg}(t)\) becomes real and even in time.Similarly, we can evaluate Equation \ref{13.69}, the lineshape function\[g (t) = - D \left[ ( \overline {n} + 1 ) \left( e^{- i \omega _ {0} t} - 1 \right) + \overline {n} \left( e^{i \omega _ {0} t} - 1 \right) \right] - i D \omega _ {0} t \label{13.79}\]The leading term in Equation \ref{13.79} gives us a vibrational progression, the second term leads to hot bands, and the final term is the reorganization energy (\(- i D \omega _ {0} t = - i \lambda t / \hbar\)). The lineshape function can be written in terms of its real and imaginary parts\[g(t)=g^{\prime}+i g^{\prime \prime}\]with\[\begin{align} g^{\prime} (t) &= D \operatorname {coth} \left( \beta \hbar \omega _ {0} / 2 \right) \left( 1 - \cos \omega _ {0} t \right) \\[4pt] g^{\prime \prime} (t) &= D \left( \sin \omega _ {0} t - \omega _ {0} t \right) \label{13.81} \end{align}\]Because these enter into the dipole correlation function as exponential arguments, the imaginary part of \(g(t)\) will reflect the bath-induced energy shift of the electronic transition gap and vibronic structure, and the real part will reflect damping, and therefore the broadening of the lineshape. Similarly to \(C_{eg}(t)\), in the high temperature limit \(g' \gg g''\). Now, using Equation \ref{13.79}, we see that the dephasing function is given by\[\begin{align} F(t) &=\exp \left[D\left((\bar{n}+1)\left(e^{-i \omega_{0} t}-1\right)+\bar{n}\left(e^{i \omega_{0} t}-1\right)\right)\right] \\[4pt]
&=\exp \left[-D\left(\operatorname{coth}\left(\frac{\beta \hbar \omega_0}{2}\right)\left(1-\cos \omega_0 t\right)+i \sin \omega_0 t\right)\right] \end{align} \label{13.82}\]Let’s confirm that we get the same result as with our original DHO model when we take the low temperature limit. Setting \(\overline{n} \rightarrow 0\) in Equation \ref{13.82}, we have our original result\[F_{k T=0}(t)=\exp \left[D\left(e^{-i \omega_{0} t}-1\right)\right]\label{13.84}\]In the high temperature limit \(g' \gg g''\), and from Equation \ref{13.78} we obtain\[\left.\begin{aligned} F (t) & \propto \exp \left[ \frac {2 D k _ {B} T} {\hbar \omega _ {0}} \cos \left( \omega _ {0} t \right) \right] \\ & = \sum _ {j = 0}^{\infty} \frac {1} {j !} \left( \frac {2 D k _ {B} T} {\hbar \omega _ {0}} \right)^{j} \cos^{j} \left( \omega _ {0} t \right) \end{aligned} \right. \label{13.85}\]which leads to an absorption spectrum which is a series of sidebands equally spaced on either side of \(\omega_{eg}\).
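The temperature dependence contained in Equation \ref{13.82} can be explored with a short numerical sketch (illustrative parameters only; a Gaussian apodization is added by hand so the discrete lines acquire a small artificial width). The first two spectral moments are checked against Equation \ref{13.73}: the mean absorption frequency sits at \(\omega_{eg} + D\omega_0 = \omega_{eg} + \lambda/\hbar\), independent of temperature, while the root-mean-square width grows as \(\sqrt{D(2\bar{n}+1)}\,\omega_0\).

```python
import numpy as np

# Sketch: finite-temperature vibronic lineshape from Eq. (13.82),
#   F(t) = exp[ D( (nbar+1)(exp(-i w0 t) - 1) + nbar(exp(+i w0 t) - 1) ) ].
# A Gaussian apodization gives the lines a small width. hbar = 1 throughout.

w0, D, weg, gam = 1.0, 0.6, 10.0, 0.05     # arbitrary illustrative values
t = np.arange(0.0, 400.0, 0.02)
dt = t[1] - t[0]
w = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt)) * 2.0 * np.pi
dw = w[1] - w[0]

for kT in (0.01, 0.5, 2.0):                # kB*T in units of hbar*w0
    nbar = 1.0 / np.expm1(w0 / kT)         # thermal occupation, Eq. (13.74)
    F = np.exp(D * ((nbar + 1) * (np.exp(-1j * w0 * t) - 1)
                    + nbar * (np.exp(+1j * w0 * t) - 1)))
    C = np.exp(-1j * weg * t) * F * np.exp(-0.5 * (gam * t) ** 2)
    # sigma(w) = Re int dt exp(+i w t) C(t); ifft carries the e^{+iwt} convention
    sigma = np.real(np.fft.fftshift(np.fft.ifft(C))) * t.size * dt
    sigma = np.clip(sigma, 0.0, None)
    sigma /= sigma.sum() * dw              # normalize to unit area
    mean = (w * sigma).sum() * dw
    rms = np.sqrt(((w - mean) ** 2 * sigma).sum() * dw)
    print(f"kT = {kT:4.2f}: mean - weg = {mean - weg:5.2f} (expect {D * w0:.2f}), "
          f"rms = {rms:4.2f} (expect {np.sqrt(D * (2 * nbar + 1)) * w0:4.2f})")
```

At low temperature only the zero-temperature progression of Equation \ref{13.84} remains; with increasing \(\bar{n}\) the hot bands fill in and the spectrum broadens symmetrically about the fixed mean.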
Since time- and frequency-domain representations are complementary, and one form may be preferable over another, it is possible to express the frequency correlation function in terms of its spectrum. For a complex spectrum of vibrational motions composed of many modes, representing the nuclear motions in terms of a spectrum rather than a beat pattern is often easier, and calculations are often easier to perform in the frequency domain. To start we define a Fourier transform pair that relates the time and frequency domain representations:\[\tilde {C} _ {e g} ( \omega ) = \int _ {- \infty}^{+ \infty} e^{i \omega t} C _ {e g} (t) d t \label{13.86}\]\[C _ {e g} (t) = \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} e^{- i \omega t} \tilde {C} _ {e g} ( \omega ) d \omega \label{13.87}\]Since the energy gap correlation function has the property\[C _ {e g} ( - t ) = C _ {e g}^{*} (t)\]it also follows from Equation \ref{13.86} that the energy gap correlation spectrum is entirely real:\[\tilde {C} _ {e g} ( \omega ) = 2 \operatorname {Re} \int _ {0}^{\infty} e^{i \omega t} C _ {e g} (t) d t \label{13.88}\]or\[\tilde {C} _ {e g} ( \omega ) = \tilde {C} _ {e g}^{\prime} ( \omega ) + \tilde {C} _ {e g}^{\prime \prime} ( \omega ) \label{13.89}\]Here \(\tilde {C} _ {e g}^{\prime} ( \omega )\) and \(\tilde {C} _ {e g}^{\prime \prime} ( \omega )\) are the Fourier transforms of the real and imaginary components of \(C _ {e g} (t)\), respectively. \(\tilde {C} _ {e g}^{\prime} ( \omega )\) and \(\tilde {C} _ {e g}^{\prime \prime} ( \omega )\) are even and odd in frequency, respectively. Thus while \(\tilde {C} _ {e g} ( \omega )\) is entirely real valued, it is asymmetric about \(\omega = 0\).With these definitions in hand, we can write the spectrum of the energy gap correlation function for coupling to a single harmonic mode (Equation \ref{13.73}):\[\tilde {C} _ {e g} \left( \omega _ {\alpha} \right) = \omega _ {\alpha}^{2} D \left( \omega _ {\alpha} \right) \left[ \left( \overline {n} _ {\alpha} + 1 \right) \delta \left( \omega - \omega _ {\alpha} \right) + \overline {n} _ {\alpha} \delta \left( \omega + \omega _ {\alpha} \right) \right] \label{13.90}\]This is a spectrum that characterizes how bath vibrational modes of a certain frequency and thermal occupation act to modify the observed energy of the system. The first and second terms in Equation \ref{13.90} describe upward and downward energy shifts of the system, respectively. Coupling to a vibration typically leads to an upshift of the energy gap transition energy, since energy must be put into the system and bath. However, as with hot bands, when there is thermal energy available in the bath, it also allows for down-shifts in the energy gap. The net balance of upward and downward shifts averaged over the bath follows the detailed balance expression\[\tilde {C} ( - \omega ) = e^{- \beta \hbar \omega} \tilde {C} ( \omega ) \label{13.91}\]The balance of rates tends toward equality with increasing temperature. Fourier transformation of Equation \ref{13.76} gives two other representations of the energy gap spectrum\[\tilde {C} _ {e g}^{\prime} \left( \omega _ {\alpha} \right) = \omega _ {\alpha}^{2} D \left( \omega _ {\alpha} \right) \operatorname {coth} \left( \beta \hbar \omega _ {\alpha} / 2 \right) \left[ \delta \left( \omega - \omega _ {\alpha} \right) + \delta \left( \omega + \omega _ {\alpha} \right) \right] \label{13.92}\]\[\tilde {C} _ {e g}^{\prime \prime} \left( \omega _ {\alpha} \right) = \omega _ {\alpha}^{2} D \left( \omega _ {\alpha} \right) \left[ \delta \left( \omega - \omega _ {\alpha} \right) - \delta \left( \omega + \omega _ {\alpha} \right) \right] \label{13.93}\]The representations in Equations \ref{13.90}, \ref{13.92}, and \ref{13.93} are not independent, but can be related to one another through\[\tilde{C}_{e g}^{\prime}\left(\omega_{\alpha}\right)=\operatorname{coth}\left(\beta \hbar \omega_{\alpha} / 2\right) \tilde{C}_{e g}^{\prime \prime}\left(\omega_{\alpha}\right)\]\[\tilde {C} _ {e g} \left( \omega _ {\alpha} \right) = \left( 1 + \operatorname {coth} \left( \beta \hbar \omega _ {\alpha} / 2 \right) \right) \tilde {C} _ {e g}^{\prime \prime} \left( \omega _ {\alpha} \right) \label{13.95}\]That is, given either the real or imaginary part of the energy gap correlation spectrum, we can predict the other part. As we will see, this relationship is one manifestation of the fluctuation-dissipation theorem that we address later. Due to its temperature independence, the spectral density \(\tilde {C} _ {e g}^{\prime \prime} \left( \omega _ {\alpha} \right)\) is the commonly used representation.Also from Equations \ref{13.69} and \ref{13.87} we obtain the lineshape function as\[\left.\begin{aligned} g (t) & = \int _ {- \infty}^{+ \infty} d \omega \frac {1} {2 \pi} \frac {\tilde {C} _ {e g} ( \omega )} {\omega^{2}} [ \exp ( - i \omega t ) + i \omega t - 1 ] \\ & = \int _ {0}^{\infty} d \omega \frac {\tilde {C} _ {e g}^{\prime \prime} ( \omega )} {\pi \omega^{2}} \left[ \operatorname {coth} \left( \frac {\beta \hbar \omega} {2} \right) ( 1 - \cos \omega t ) + i ( \sin \omega t - \omega t ) \right] \end{aligned} \right. \label{13.96}\]The first expression relates \(g(t)\) to the complex energy gap correlation function, whereas the second separates the real and the imaginary parts and relates them to the imaginary part of the energy gap correlation function.More generally for condensed phase problems, the system coordinates that we observe in an experiment will interact with a continuum of nuclear motions that may reflect molecular vibrations, phonons, or intermolecular interactions. We describe this continuum as a continuous distribution of harmonic oscillators of varying mode frequency and coupling strength. The Energy Gap Hamiltonian is readily generalized to the case of a continuous distribution of motions if we statistically characterize the density of states and the strength of interaction between the system and this bath.
This approach is also referred to as the Spin-Boson Model, used for treating a two-level (spin-½) system interacting with a quantum harmonic bath.Following our earlier discussion of the DHO model, the generalization of the EG Hamiltonian to the multimode case is\[H _ {0} = \hbar \omega _ {e g} + H _ {e g} + H _ {B} \label{13.97}\]\[H _ {B} = \sum _ {\alpha} \hbar \omega _ {\alpha} \left( p _ {\alpha}^{2} + q _ {\alpha}^{2} \right) \label{13.98}\]\[H _ {e g} = - \sum _ {\alpha} 2 \hbar \omega _ {\alpha} d _ {\alpha} q _ {\alpha} + \lambda \label{13.99}\]\[\lambda = \sum _ {\alpha} \hbar \omega _ {\alpha} d _ {\alpha}^{2} \label{13.100}\]Note that the time-dependence of \(H_{eg}\) results from the interaction with the bath:\[H _ {e g} (t) = e^{i H _ {B} t / \hbar} H _ {e g} e^{- i H _ {B} t / \hbar} \label{13.101}\]Also, since the harmonic modes are independent of one another, the dephasing function factors and the lineshape function is additive over the modes:\[F(t)=\prod_{\alpha} F_{\alpha}(t) \quad g(t)=\sum_{\alpha} g_{\alpha}(t)\label{13.102}\]For a continuum, we assume that the modes are so numerous as to form a continuum, and that the sums in the equations above can be replaced by integrals over a continuous distribution of states characterized by a density of states \(W ( \omega )\). Also, the interactions with all modes of a particular frequency are taken to be equal, so that we can simply average over a frequency-dependent coupling strength \(D ( \omega ) = d^{2} ( \omega )\). For instance, Equation \ref{13.102} becomes\[g (t) = \int d \omega _ {\alpha} W \left( \omega _ {\alpha} \right) g \left( t , \omega _ {\alpha} \right) \label{13.103}\]Coupling to a continuum leads to dephasing that results from interactions with modes of varying frequency. This will be characterized by damping of the energy gap frequency correlation function\[C _ {e g} (t) = \int d \omega _ {\alpha} C _ {e g} \left( \omega _ {\alpha} , t \right) W \left( \omega _ {\alpha} \right) \label{13.104}\]Here \(C _ {e g} \left( \omega _ {\alpha} , t \right) = \left\langle \delta \omega _ {e g} \left( \omega _ {\alpha} , t \right) \delta \omega _ {e g} \left( \omega _ {\alpha} , 0 \right) \right\rangle\) refers to the energy gap frequency correlation function for a single harmonic mode given in Equation \ref{13.73}. While Equation \ref{13.104} expresses the modulation of the energy gap in the time domain, we can alternatively express the continuous distribution of coupled bath modes in the frequency domain:\[\tilde {C} _ {e g} ( \omega ) = \int d \omega _ {\alpha} W \left( \omega _ {\alpha} \right) \tilde {C} _ {e g} \left( \omega _ {\alpha} \right) \label{13.105}\]An integral of a single harmonic mode spectrum over a continuous density of states provides a coupling-weighted density of states that reflects the action spectrum for the system-bath interaction. We evaluate this with the single harmonic mode spectrum, Equation \ref{13.90}. We see that the spectrum of the correlation function for positive frequencies is related to the product of the density of states and the frequency-dependent coupling\[\tilde{C}_{e g}(\omega)=\omega^{2} D(\omega) W(\omega)(\bar{n}+1) \quad(\omega>0) \label{13.106}\]\[\tilde{C}_{e g}(\omega)=\omega^{2} D(\omega) W(\omega) \bar{n} \quad(\omega<0) \label{13.107}\]This is an action spectrum that reflects the coupling-weighted density of states of the bath that contributes to the spectrum.In practice, the unusual symmetry of \(\tilde {C} _ {e g} ( \omega )\) and its growth as \(\omega^{2}\) make it difficult to work with.
Therefore we choose to express the frequency-domain representation of the coupling-weighted density of states in Equation \ref{13.106} as a spectral density, defined as\[\left.\begin{aligned} \rho ( \omega ) & \equiv \frac {\tilde {C} _ {e g}^{\prime \prime} ( \omega )} {\pi \omega^{2}} \\ & = \frac {1} {\pi} \int d \omega _ {\alpha} W \left( \omega _ {\alpha} \right) D \left( \omega _ {\alpha} \right) \delta \left( \omega - \omega _ {\alpha} \right) \\ & = \frac {1} {\pi} W ( \omega ) D ( \omega ) \end{aligned} \right. \label{13.108}\]This expression is real and defined only for positive frequencies. Note that \(\tilde {C} _ {e g}^{\prime \prime} ( \omega )\) is an odd function in \(\omega\), and therefore \(\rho(\omega)\) is as well.The reorganization energy can be obtained from the first moment of the spectral density\[\lambda = \hbar \int _ {0}^{\infty} d \omega \, \omega \, \rho ( \omega ) \label{13.109}\]Furthermore, from Equations \ref{13.69} and \ref{13.105} we obtain the lineshape function in two forms\[\left.\begin{aligned} g (t) & = \int _ {- \infty}^{+ \infty} d \omega \frac {1} {2 \pi} \frac {\tilde {C} _ {e g} ( \omega )} {\omega^{2}} [ \exp ( - i \omega t ) + i \omega t - 1 ] \\[4pt] & = - \frac {i \lambda t} {\hbar} + \int _ {0}^{\infty} d \omega \, \rho ( \omega ) \left[ \operatorname {coth} \left( \frac {\beta \hbar \omega} {2} \right) ( 1 - \cos \omega t ) + i \sin \omega t \right] \end{aligned} \right. \label{13.110}\]In this expression the temperature dependence implies that in the high temperature limit, the real part of \(g(t)\) will dominate, as expected for a classical system. This is a perfectly general expression for the lineshape function in terms of an arbitrary spectral distribution describing the time scale and amplitude of energy gap fluctuations. Given a spectral density \(\rho(\omega)\), you can calculate various spectroscopic observables and other time-dependent processes in a fluctuating environment.
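As an illustration of this machinery, the sketch below (arbitrary parameters, with \(\hbar = 1\)) evaluates Equation \ref{13.110} by direct quadrature for an Ohmic spectral density with an exponential cutoff (the \(s = 1\) case of Equation \ref{13.111} introduced below), verifies the reorganization energy against Equation \ref{13.109}, and transforms the resulting dipole correlation function (Equation \ref{13.70}) into an absorption lineshape.

```python
import numpy as np

# Sketch (hbar = 1, arbitrary values): lineshape from an Ohmic spectral
# density with exponential cutoff, rho(w) = (lam / (wc * w)) * exp(-w / wc).
# g(t) is evaluated by direct quadrature of Eq. (13.110).

lam, wc, kT, weg = 0.5, 1.0, 0.5, 10.0

w = np.linspace(1e-3, 30.0, 3000)          # quadrature grid (avoids w = 0)
dw = w[1] - w[0]
rho = lam / (wc * w) * np.exp(-w / wc)
print("reorganization energy, Eq. (13.109):", (w * rho).sum() * dw)   # ~ lam

coth = 1.0 / np.tanh(w / (2.0 * kT))
t = np.arange(0.0, 20.0, 0.01)
dt = t[1] - t[0]
g = np.array([(rho * (coth * (1.0 - np.cos(w * ti)) + 1j * np.sin(w * ti))).sum() * dw
              for ti in t]) - 1j * lam * t

# Dipole correlation function, Eq. (13.70), and its one-sided transform
C = np.exp(-1j * (weg + lam) * t - g)
wgrid = np.linspace(weg - 6.0, weg + 6.0, 600)
sigma = np.array([np.real((np.exp(1j * wg * t) * C).sum() * dt) for wg in wgrid])

above = wgrid[sigma > 0.5 * sigma.max()]
print("peak at w =", wgrid[np.argmax(sigma)], ", FWHM =", above.max() - above.min())
```

The computed width interpolates between the Gaussian and Lorentzian behavior analyzed in the limiting cases that follow.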
Now, let’s evaluate the behavior of the lineshape function and absorption lineshape for different forms of the spectral density. To keep things simple, we will consider the high temperature limit, \(k _ {B} T \gg \hbar \omega\). Here\[\operatorname {coth} ( \beta \hbar \omega / 2 ) \rightarrow 2 / \beta \hbar \omega\]and we can neglect the imaginary part of the frequency correlation function and lineshape function. These examples are motivated by the spectral densities observed for random or noisy processes. Depending on the frequency range and process of interest, noise tends to scale as \(\rho ( \omega ) \sim \omega^{- n}\), where \(n = 0\), \(1\) or \(2\). This behavior is often described in terms of a spectral density of the form\[\rho ( \omega ) \propto \omega _ {c}^{1 - s} \omega^{s - 2} e^{- \omega / \omega _ {c}} \label{13.111}\]where \(\omega_c\) is a cut-off frequency, and the units are inverse frequency. These spectral densities have the desired property of being an odd function in \(\omega\), and can be integrated to a finite value. The case \(s = 1\) is known as the Ohmic spectral density, whereas \(s > 1\) is super-ohmic and \(s < 1\) is sub-ohmic.Let’s first consider the example when \(\rho(\omega)\) drops as \(1/\omega\) with frequency, which refers to the Ohmic spectral density with a high cut-off frequency. This is the spectral density that corresponds to an energy gap correlation function that decays infinitely fast: \(C_{e g}(t) \sim \delta(t)\). To choose a definition consistent with Equation \ref{13.109}, we set\[\rho ( \omega ) = \frac {\lambda} {\Lambda \hbar \omega} \label{13.112}\]where \(\Lambda\) is a finite high-frequency integration limit that we enforce to keep \(\rho(\omega)\) well behaved. \(\Lambda\) has units of frequency, and it is equated with the inverse correlation time for the fast decay of \(C_{eg}(t)\). Now we evaluate\[\begin{aligned} g (t) & = \int _ {0}^{\Lambda} d \omega \frac {2 k _ {B} T} {\hbar \omega} \rho ( \omega ) ( 1 - \cos \omega t ) - \frac {i \lambda t} {\hbar} \\ & = \int _ {0}^{\Lambda} d \omega \frac {2 \lambda k _ {B} T} {\Lambda \hbar^{2}} \frac {( 1 - \cos \omega t )} {\omega^{2}} - \frac {i \lambda t} {\hbar} \\ & \approx \lambda \frac {\pi k _ {B} T} {\Lambda \hbar^{2}} t - \frac {i \lambda t} {\hbar} \end{aligned} \label{13.113}\]where the last line holds for \(\Lambda t \gg 1\). Then we obtain the dephasing function\[F (t) = e^{- \Gamma t} \label{13.114}\]where we have defined the exponential damping constant as\[\Gamma = \lambda \frac {\pi k _ {B} T} {\Lambda \hbar^{2}} \label{13.115}\]From this we obtain the absorption lineshape\[\sigma _ {a b s} \propto \frac {\left| \mu _ {e g} \right|^{2}} {\left( \omega - \omega _ {e g} \right) + i \Gamma} \label{13.116}\]Thus, a spectral density that scales as \(1 / \omega\) has a rapidly fluctuating bath and leads to a homogeneous Lorentzian lineshape with a half-width \(\Gamma\).Now take the case that we choose a Lorentzian spectral density centered at \(\omega = 0\). To keep the proper odd function of \(\omega\) and definition of \(\lambda\) we write:\[\rho ( \omega ) = \frac {\lambda} {\hbar \omega} \frac {\Lambda} {\omega^{2} + \Lambda^{2}} \label{13.117}\]Note that for frequencies \(\omega \ll \Lambda\) this has the Ohmic form of Equation \ref{13.112}. This is a spectral density that corresponds to an energy gap correlation function that drops exponentially as \(C_{e g}(t) \sim \exp (-\Lambda t)\). Here, in the high temperature (classical) limit \(k _ {B} T \gg \hbar \Lambda\), neglecting the imaginary part, we find\[g (t) \approx \frac {\pi \lambda k _ {B} T} {\hbar^{2} \Lambda^{2}} [ \exp ( - \Lambda t ) + \Lambda t - 1 ] \label{13.118}\]This expression looks familiar. If we equate\[\Delta^{2} = \lambda \frac {\pi k _ {B} T} {\hbar^{2}} \label{13.119}\]and\[\tau _ {c} = \frac {1} {\Lambda} \label{13.120}\]we obtain the same lineshape function as the classical Gaussian-stochastic model:\[g (t) = \Delta^{2} \tau _ {c}^{2} \left[ \exp \left( - t / \tau _ {c} \right) + t / \tau _ {c} - 1 \right] \label{13.121}\]So, the interaction of an electronic transition with a harmonic bath leads to line broadening that is equivalent to random fluctuations of the energy gap. As we noted earlier, for the homogeneous limit, we find \(\Gamma = \Delta^{2} \tau _ {c}\).This page titled 14.4: The Energy Gap Hamiltonian is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
14.5: Correspondence of Harmonic Bath and Stochastic Equations of Motion
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/14%3A_Fluctuations_in_Spectroscopy/14.05%3A_Correspondence_of_Harmonic_Bath_and_Stochastic_Equations_of_Motion | So, why does the mathematical model for coupling of a system to a harmonic bath give the same results as the classical stochastic equations of motion for fluctuations? Why does coupling to a continuum of bath states have the same physical manifestation as perturbation by random fluctuations? The answer is that in both cases, we really have imperfect knowledge of the behavior of all the particles present. A small observed subset of the particles will exhibit dynamics with a random character. These dynamics can be quantified through a correlation function or a spectral density for the time scales of motion of the bath. In this section, we will demonstrate a more formal relationship that illustrates the equivalence of these pictures.To take our discussion further, let’s again consider the electronic absorption spectrum from a classical perspective. It’s quite common to think that the electronic transition of interest is coupled to a particular nuclear coordinate \(Q\), which we will call a local coordinate. This local coordinate could be an intramolecular normal vibrational mode, an intermolecular rattling in a solvent shell, a lattice vibration, or another motion that influences the electronic transition. The idea is that we take the observed electronic transition to be linearly dependent on one or more local coordinates. Therefore describing \(Q\) allows us to describe the spectroscopy. However, since this local mode interacts with still further degrees of freedom, and we are extracting a particular coordinate out of a continuum of other motions, the local mode will appear to feel a fluctuating environment, that is, a friction.Classically, we describe fluctuations in \(Q\) as Brownian motion, typically through a Langevin equation. In the simplest sense, this is an equation that restates Newton’s equation of motion \(F=ma\) for a fluctuating force acting on a particle with position \(Q\). For the case that this particle is confined in a harmonic potential,\[m \ddot{Q}(t)+m \omega_{0}^{2} Q+m \gamma \dot{Q}=f_{R}(t) \label{13.122}\]Here the terms on the left side represent a damped harmonic oscillator. The first term is the force due to acceleration of the particle of mass \(m\left(F_{a c c}=m a\right)\). The second term is the harmonic restoring force of the potential, \(F_{r e s}=-\partial V / \partial Q=-m \omega_{0}^{2} Q\). The third term allows friction to damp the motion of the coordinate at a rate \(\gamma\). The motion of \(Q\) is under the influence of \(f_{R}(t)\), a random fluctuating force exerted on \(Q\) by its surroundings.Under steady-state conditions, it stands to reason that the random force acting on \(Q\) is the origin of the damping. The environment acts on \(Q\) with stochastic perturbations that add and remove kinetic energy, which ultimately leads to dissipation of any excess energy. Therefore, the Langevin equation is modelled as a Gaussian stationary process.
We take \(f_{R}(t)\) to have a time-averaged value of zero,\[\left\langle f _ {R} (t) \right\rangle = 0 \label{13.123}\]and obey the classical fluctuation-dissipation theorem:\[\gamma = \frac {1} {2 m k _ {B} T} \int _ {- \infty}^{\infty} d t \left\langle f _ {R} (t) f _ {R} ( 0 ) \right\rangle \label{13.124}\]This shows explicitly how the damping is related to the correlation time for the random force. We will pay particular attention to the Markovian case\[\left\langle f _ {R} (t) f _ {R} ( 0 ) \right\rangle = 2 m \gamma k _ {B} T \delta (t) \label{13.125}\]which indicates that the fluctuations immediately lose all correlation on the time scale of the evolution of \(Q\). The Langevin equation can be used to describe the correlation function for the time dependence of \(Q\). For the Markovian case, Equation \ref{13.122} leads to\[C _ {Q Q} (t) = \frac {k _ {B} T} {m \omega _ {0}^{2}} \left( \cos \zeta t + \frac {\gamma} {2 \zeta} \sin \zeta t \right) e^{- \gamma t / 2} \label{13.126}\]where the reduced frequency \(\zeta=\sqrt{\omega_{0}^{2}-\gamma^{2} / 4}\). The frequency domain expression, obtained by Fourier transformation, is\[\tilde {C} _ {Q Q} ( \omega ) = \frac {\gamma k _ {B} T} {m \pi} \frac {1} {\left( \omega _ {0}^{2} - \omega^{2} \right)^{2} + \omega^{2} \gamma^{2}} \label{13.127}\]Remembering that the absorption lineshape was determined by the quantum mechanical energy gap correlation function \(\langle q(t) q(0)\rangle\), one can imagine an analogous classical description of the spectroscopy of a molecule that experiences interactions with a fluctuating environment. In essence this is what we did when discussing the Gaussian stochastic model of the lineshape. A more general description of the position of a particle subject to a fluctuating force is the Generalized Langevin Equation (GLE). The GLE accounts for the possibility that the damping may be time-dependent and carry memory of earlier configurations of the system:\[m \ddot {Q} (t) + m \omega _ {0}^{2} Q + m \int _ {0}^{t} d \tau \gamma ( t - \tau ) \dot {Q} ( \tau ) = f _ {R} (t) \label{13.128}\]The memory kernel, \(\gamma ( t - \tau )\), is a correlation function that describes the time scales over which the fluctuating force retains memory of its previous state. The force due to friction on \(Q\) depends on the history of the system through \(\tau\), the time preceding \(t\), and the relaxation of \(\gamma ( t - \tau )\). The classical fluctuation-dissipation relationship relates the magnitude of the fluctuating forces on the system coordinate to the damping\[\left\langle f_{R}(t) f_{R}(\tau)\right\rangle=2 m k_{B} T \gamma(t-\tau) \label{13.129}\]As expected, for the case that \(\gamma ( t - \tau ) = \gamma \delta ( t - \tau )\), the GLE reduces to the Markovian case, Equation \ref{13.122}.
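The Markovian Langevin equation is also simple to integrate directly, which provides a numerical check on Equation \ref{13.126}. The following is an illustrative sketch (arbitrary parameters; simple semi-implicit Euler stepping, so a small time step is required):

```python
import numpy as np

# Sketch: semi-implicit Euler integration of the Markovian Langevin equation
#   m Qddot + m w0^2 Q + m gamma Qdot = f_R(t),
# with <f_R(t) f_R(0)> = 2 m gamma kB T delta(t), Eq. (13.125).

m, w0, gamma, kT = 1.0, 1.0, 0.5, 1.0      # arbitrary illustrative values
dt, nsteps, ntraj = 0.01, 4000, 2000

rng = np.random.default_rng(1)
Q = rng.normal(0.0, np.sqrt(kT / (m * w0**2)), ntraj)  # thermal initial conditions
V = rng.normal(0.0, np.sqrt(kT / m), ntraj)

Q0 = Q.copy()
corr = np.empty(nsteps)
amp = np.sqrt(2.0 * m * gamma * kT / dt)   # white noise, discretized over dt

for n in range(nsteps):
    corr[n] = np.mean(Q * Q0)
    f = amp * rng.standard_normal(ntraj)
    V += (-w0**2 * Q - gamma * V + f / m) * dt
    Q += V * dt

# Analytical result, Eq. (13.126)
t = dt * np.arange(nsteps)
zeta = np.sqrt(w0**2 - gamma**2 / 4.0)
exact = (kT / (m * w0**2)) * (np.cos(zeta * t)
        + gamma / (2 * zeta) * np.sin(zeta * t)) * np.exp(-gamma * t / 2)
print("max |C_sim - C_exact| =", np.abs(corr - exact).max())
```

Within the statistical noise, the simulated correlation function reproduces the damped oscillatory form of Equation \ref{13.126}.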
To demonstrate that the classical dynamics of the particle described by the GLE is related to the quantum mechanical dynamics for a particle interacting with a harmonic bath, we will outline the derivation of a quantum mechanical analog of the classical GLE. To do this we will derive an expression for the time-evolution of the system under the influence of the harmonic bath. We work with a Hamiltonian with a linear coupling between the system and the bath\[H _ {H B} = H _ {S} ( P , Q ) + H _ {B} \left( p _ {\alpha} , q _ {\alpha} \right) + H _ {S B} ( Q , q ) \label{13.130}\]We take the system to be a particle of mass \(M\), described through variables \(P\) and \(Q\), whereas \(m_{\alpha}\), \(p_{\alpha}\), and \(q_{\alpha}\) are bath variables. For the present case, we will take the system to be a quantum harmonic oscillator,\[H _ {S} = \frac {P^{2}} {2 M} + \frac {1} {2} M \Omega^{2} Q^{2} \label{13.131}\]and the Hamiltonian for the bath and its interaction with the system is written as\[H _ {B} + H _ {S B} = \sum _ {\alpha} \left( \frac {p _ {\alpha}^{2}} {2 m _ {\alpha}} + \frac {m _ {\alpha} \omega _ {\alpha}^{2}} {2} \left( q _ {\alpha} - \frac {c _ {\alpha}} {m _ {\alpha} \omega _ {\alpha}^{2}} Q \right)^{2} \right) \label{13.132}\]This expression explicitly shows that each of the bath oscillators is displaced with respect to the system by an amount dependent on their mutual coupling. In analogy to our work with the Displaced Harmonic Oscillator, if we define a displacement operator\[\hat {D} = \exp \left( - \frac {i} {\hbar} \sum _ {\alpha} \hat {p} _ {\alpha} \xi _ {\alpha} \right) \label{13.133}\]where\[\xi _ {\alpha} = \frac {c _ {\alpha}} {m _ {\alpha} \omega _ {\alpha}^{2}} Q \label{13.134}\]then\[H _ {B} + H _ {S B} = \hat {D}^{\dagger} H _ {B} \hat {D} \label{13.135}\]Equation \ref{13.132} is merely a different representation of our earlier harmonic bath model. To see this we write Equation \ref{13.132} as\[H _ {B} + H _ {S B} = \sum _ {\alpha} \hbar \omega _ {\alpha} \left( p _ {\alpha}^{2} + \left( q _ {\alpha} - c _ {\alpha} Q \right)^{2} \right) \label{13.136}\]where the coordinates and momenta are written in reduced form\[\begin{array}{l}
\underline{Q}=Q \sqrt{m \omega_{0} / 2 \hbar} \\
q_{\alpha}=q_{\alpha} \sqrt{m_{\alpha} \omega_{\alpha} / 2 \hbar} \\
p_{\alpha}=p_{\alpha} / \sqrt{2 \hbar m_{\alpha} \omega_{\alpha}}
\end{array} \label{13.137}\]Also, the reduced coupling of the system to the \(\alpha^{\text {th }}\) oscillator is\[\mathcal {C} _ {\alpha} = c _ {\alpha} / \omega _ {\alpha} \sqrt {m _ {\alpha} \omega _ {\alpha} m \omega _ {0}} \label{13.138}\]Expanding Equation \ref{13.136} and collecting terms, we find that we can separate terms as in the harmonic bath model\[H _ {B} = \sum _ {\alpha} \hbar \omega _ {\alpha} \left( p _ {\alpha}^{2} + q _ {\alpha}^{2} \right) \label{13.139}\]\[H _ {S B} = - 2 \sum _ {\alpha} \hbar \omega _ {\alpha} d _ {\alpha} q _ {\alpha} + \lambda _ {B} \label{13.140}\]The reorganization energy due to the bath oscillators is\[\lambda _ {B} = \sum _ {\alpha} \hbar \omega _ {\alpha} d _ {\alpha}^{2} \label{13.141}\]and the unitless bath oscillator displacement is\[d _ {\alpha} = \underline{Q} \, \mathcal {C} _ {\alpha} \label{13.142}\]where \(\underline{Q}\) is the reduced system coordinate. For our current work we regroup the total Hamiltonian (Equation \ref{13.130}) as\[H _ {H B} = \left[ \frac {P^{2}} {2 M} + \frac {1} {2} M \overline {\Omega}^{2} Q^{2} \right] + \sum _ {\alpha} \hbar \omega _ {\alpha} \left( p _ {\alpha}^{2} + q _ {\alpha}^{2} \right) - 2 \sum _ {\alpha} \hbar \omega _ {\alpha} c _ {\alpha} Q q _ {\alpha} \label{13.143}\]where the renormalized frequency is\[\overline {\Omega}^{2} = \Omega^{2} + \Omega \sum _ {\alpha} \omega _ {\alpha} c _ {\alpha}^{2} \label{13.144}\]To demonstrate the equivalence of the dynamics under this Hamiltonian and the GLE, we can derive an equation of motion for the system coordinate \(Q\). We approach this by first expressing these variables in terms of ladder operators\[\hat{P}=i\left(\hat{a}^{\dagger}-\hat{a}\right) \quad \hat{p}_{\alpha}=i\left(\hat{b}_{\alpha}^{\dagger}-\hat{b}_{\alpha}\right) \label{13.145}\]\[\hat{Q}=\left(\hat{a}^{\dagger}+\hat{a}\right) \quad \hat{q}_{\alpha}=\left(\hat{b}_{\alpha}^{\dagger}+\hat{b}_{\alpha}\right) \label{13.146}\]Here \(\hat {a}\) and \(\hat {a}^{\dagger}\) are system operators, and \(\hat {b}\) and \(\hat {b}^{\dagger}\) are bath operators.
If the observed particle is taken to be bound in a harmonic potential, then the Hamiltonian in Equation \ref{13.130} can be written as\[H _ {H B} = \hbar \overline {\Omega} \left( \hat {a}^{\dagger} \hat {a} + \frac {1} {2} \right) + \sum _ {\alpha} \hbar \omega _ {\alpha} \left( \hat {b} _ {\alpha}^{\dagger} \hat {b} _ {\alpha} + \frac {1} {2} \right) - \left( \hat {a}^{\dagger} + \hat {a} \right) \sum _ {\alpha} \hbar \omega _ {\alpha} c _ {\alpha} \left( \hat {b} _ {\alpha}^{\dagger} + \hat {b} _ {\alpha} \right) \label{13.147}\]The equations of motion for the operators in Equations \ref{13.145} and \ref{13.146} can be obtained from the Heisenberg equation of motion,\[\dot {\hat {a}} = \frac {i} {\hbar} \left[ H _ {H B} , \hat {a} \right] \label{13.148}\]from which we find\[\dot {\hat {a}} = - i \overline {\Omega} \hat {a} + i \sum _ {\alpha} \omega _ {\alpha} c _ {\alpha} \left( \hat {b} _ {\alpha}^{\dagger} + \hat {b} _ {\alpha} \right) \label{13.149}\]\[\dot {\hat {b}} _ {\alpha} = - i \omega _ {\alpha} \hat {b} _ {\alpha} + i \omega _ {\alpha} c _ {\alpha} \left( \hat {a}^{\dagger} + \hat {a} \right)\label{13.150}\]To derive an equation of motion for the system coordinate, we begin by solving for the time evolution of the bath coordinates by directly integrating Equation \ref{13.150},\[\hat {b} _ {\alpha} (t) = e^{- i \omega _ {\alpha} t} \int _ {0}^{t} e^{i \omega _ {\alpha} t^{\prime}} \left( i \omega _ {\alpha} c _ {\alpha} \left( \hat {a}^{\dagger} \left( t^{\prime} \right) + \hat {a} \left( t^{\prime} \right) \right) \right) d t^{\prime} + \hat {b} _ {\alpha} ( 0 ) e^{- i \omega _ {\alpha} t} \label{13.151}\]and insert the result into Equation \ref{13.149}. This leads to\[\dot {\hat {a}} + i \overline {\Omega} \hat {a} - i \sum _ {\alpha} \omega _ {\alpha} c _ {\alpha}^{2} \left( \hat {a}^{\dagger} + \hat {a} \right) + i \int _ {0}^{t} d t^{\prime} \kappa \left( t - t^{\prime} \right) \left( \dot {\hat {a}}^{\dagger} \left( t^{\prime} \right) + \dot {\hat {a}} \left( t^{\prime} \right) \right) = i F (t) \label{13.152}\]where\[\kappa (t) = \sum _ {\alpha} \omega _ {\alpha} c _ {\alpha}^{2} \cos \left( \omega _ {\alpha} t \right) \label{13.153}\]and\[F (t) = \sum _ {\alpha} c _ {\alpha} \left[ \hat {b} _ {\alpha} ( 0 ) - \omega _ {\alpha} c _ {\alpha} \left( \hat {a}^{\dagger} ( 0 ) + \hat {a} ( 0 ) \right) \right] e^{- i \omega _ {\alpha} t} + h . c . \label{13.154}\]Now, recognizing that the time-derivatives of the system variables are given by\[\dot {\hat {P}} = i \left( \dot {\hat {a}}^{\dagger} - \dot {\hat {a}} \right) \label{13.155}\]\[\dot {\hat {Q}} = \dot {\hat {a}}^{\dagger} + \dot {\hat {a}} \label{13.156}\]and substituting Equation \ref{13.152} into \ref{13.155}, we can write an equation of motion\[\dot {\hat {P}} (t) + \left( \overline {\Omega} - 2 \sum _ {\alpha} \omega _ {\alpha} c _ {\alpha}^{2} \right) \hat {Q} + \int _ {0}^{t} d t^{\prime} \, 2 \kappa \left( t - t^{\prime} \right) \dot {\hat {Q}} \left( t^{\prime} \right) = F (t) + F^{\dagger} (t) \label{13.157}\]Equation \ref{13.157} bears a striking resemblance to the classical GLE, Equation \ref{13.128}.
In fact, if we define

\[\gamma(t) = 2 \bar{\Omega} \kappa(t)\]

\[= \frac{1}{M} \sum_{\alpha} \frac{c_{\alpha}^{2}}{m_{\alpha} \omega_{\alpha}^{2}} \cos \omega_{\alpha} t \label{13.158}\]

\[f_{R}(t) = \sqrt{2 \hbar M \Omega}\left[F(t) + F^{\dagger}(t)\right]\]

\[= \sum_{\alpha} c_{\alpha}\left[q_{\alpha} \cos \omega_{\alpha} t + \frac{p_{\alpha}}{m_{\alpha} \omega_{\alpha}} \sin \omega_{\alpha} t\right] \label{13.159}\]

then the resulting equation is isomorphic to the classical GLE

\[\dot{P}(t) + M \Omega^{2} Q(t) + M \int_{0}^{t} dt^{\prime} \, \gamma\left(t - t^{\prime}\right) \dot{Q}\left(t^{\prime}\right) = f_{R}(t) \label{13.160}\]

This demonstrates that the quantum harmonic bath acts as a dissipative environment, whose friction on the system coordinate is given by Equation \ref{13.158}. What we have shown here is an outline of the proof; detailed discussion of these relationships can be found elsewhere.1

1. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006.
2. Nitzan, A., Chemical Dynamics in Condensed Phases. Oxford University Press: New York, 2006; Ch. 8.
3. Caldeira, A. O.; Leggett, A. J., Ann. Phys. 1983, 149, 374–456.
4. Weiss, U., Quantum Dissipative Systems, 3rd ed.; World Scientific: Hackensack, NJ, 2008. Leggett, A. J.; Chakravarty, S.; Dorsey, A. T.; Fisher, M. P. A.; Garg, A.; Zwerger, W., Dynamics of the dissipative two-state system, Reviews of Modern Physics 1987, 59, 1–85. Yan, Y.; Xu, R., Quantum Mechanics of Dissipative Systems, Annual Review of Physical Chemistry 2005, 56, 187–219.
15.1: Electronic Interactions
In this section we will describe processes that result from the interaction between two or more molecular electronic states, such as the transport of electrons or electronic excitation. This problem can be formulated in terms of a familiar Hamiltonian

\[H = H_{0} + V\]

in which \(H_0\) describes the electronic states (including any coupling to nuclear motion), and \(V\) is the interaction between the electronic states. In formulating such a problem we will need to consider some basic questions: Is \(V\) strong or weak? Are the electronic states described in a diabatic or adiabatic basis? How do nuclear degrees of freedom influence the electronic couplings? For weak couplings, we can describe the transport of electrons and electronic excitation with perturbation theory drawing on Fermi's Golden Rule:

\[\begin{align} \overline{w} &= \frac{2\pi}{\hbar} \sum_{k,\ell} p_{\ell} \left| V_{k\ell} \right|^{2} \delta\left(E_{k} - E_{\ell}\right) \\[4pt] &= \frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} dt \left\langle V_{I}(t) V_{I}(0) \right\rangle \end{align}\]

This approach underlies common descriptions of electronic energy transport and nonadiabatic electron transfer. We will discuss this regime, concentrating on the influence of the vibrational motions they are coupled to. However, the electronic couplings can also be strong, in which case the resulting states become delocalized. We will discuss this limit in the context of excitons that arise in molecular aggregates.

To begin, it is useful to catalog a number of electronic interactions of interest. We can use some schematic diagrams to illustrate them, emphasizing the close relationship between the various transport processes. However, we need to be careful, since these are not meant to imply a mechanism or meaningful information on dynamics. Here are a few commonly described processes involving transfer from a donor molecule D to an acceptor molecule A:

Resonance energy transfer: applies to the transfer of energy from the electronic excited state of a donor to an acceptor molecule. Arises from a Coulomb interaction that is operative at long range, i.e., distances large compared to molecular dimensions. Requires electronic resonance. Named Förster Resonance Energy Transfer (FRET) for the first practical derivations of expressions describing this effect.

Nonadiabatic electron transfer: requires wavefunction overlap.

Ground state electron transfer.

Excited state electron transfer.

Hole transfer.

Dexter transfer: requires wavefunction overlap; singlet or triplet.
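To make the Golden Rule rate expression above concrete, here is a minimal numerical sketch (our illustration, not part of the notes) that evaluates \(\overline{w}\) for an assumed model interaction correlation function; the parameters \(V\), \(\tau\), and \(\omega_0\) are arbitrary, and the analytic Fourier-transform result provides a check.

```python
import numpy as np

# Model interaction correlation function (an assumption, for illustration):
# C(t) = |V|^2 exp(-|t|/tau) exp(-i*w0*t), working in units with hbar = 1.
V, tau, w0 = 0.1, 2.0, 1.5
t = np.linspace(-200, 200, 400001)
C = V**2 * np.exp(-np.abs(t) / tau) * np.exp(-1j * w0 * t)

# Golden Rule rate: w = (1/hbar^2) Int dt <V_I(t) V_I(0)>
w_numeric = np.trapz(C, t).real
w_exact = V**2 * 2 * tau / (1 + (w0 * tau)**2)  # analytic integral of C(t)
print(w_numeric, w_exact)                        # the two agree closely
```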
15.2: Förster Resonance Energy Transfer (FRET)
Förster resonance energy transfer (FRET) refers to the nonradiative transfer of an electronic excitation from a donor molecule to an acceptor molecule:

\[\ce{D}^{*} + \ce{A} \rightarrow \ce{D} + \ce{A}^{*} \label{14.1}\]

This electronic excitation transfer, whose practical description was first given by Förster, arises from a dipole–dipole interaction between the electronic states of the donor and the acceptor, and does not involve the emission and reabsorption of a light field. Transfer occurs when the oscillations of an optically induced electronic coherence on the donor are resonant with the electronic energy gap of the acceptor. The strength of the interaction depends on the magnitude of a transition dipole interaction, which depends on the magnitude of the donor and acceptor transition matrix elements, and the alignment and separation of the dipoles. The sharp \(1/r^6\) dependence on distance is often used in spectroscopic characterization of the proximity of donor and acceptor.

The electronic ground and excited states of the donor and acceptor molecules all play a role in FRET. We consider the case in which we have excited the donor electronic transition, and the acceptor is in the ground state. Absorption of light by the donor at the equilibrium energy gap is followed by rapid vibrational relaxation that dissipates the reorganization energy of the donor \(\lambda_D\) over the course of picoseconds. This leaves the donor in a coherence that oscillates at the energy gap in the donor excited state \(\omega_{eg}^{D}\left(q_D = d_D\right)\). The time scale for FRET is typically nanoseconds, so this preparation step is typically much faster than the transfer phase. For resonance energy transfer we require a resonance condition, so that the oscillation of the excited donor coherence is resonant with the ground state electronic energy gap of the acceptor \(\omega_{eg}^{A}\left(q_A = 0\right)\). Transfer of energy to the acceptor leads to vibrational relaxation and subsequent acceptor fluorescence that is spectrally shifted from the donor fluorescence. In practice, the efficiency of energy transfer is obtained by comparing the fluorescence emitted from donor and acceptor.

This description of the problem lends itself naturally to treatment with a DHO Hamiltonian. However, an alternate picture is also applicable, which can be described through the EG Hamiltonian. FRET arises from the resonance that occurs when the fluctuating electronic energy gap of a donor in its excited state matches the energy gap of an acceptor in its ground state. In other words

\[\underbrace{\hbar \omega_{eg}^{D} - \lambda_{D}}_{\Omega_{D}(t)} = \underbrace{\hbar \omega_{eg}^{A} - \lambda_{A}}_{\Omega_{A}(t)} \label{14.2}\]

These energy gaps are time-dependent, with occasional crossings that allow transfer of energy.

Our system includes the ground and excited potentials of the donor and acceptor molecules. The four possible electronic configurations of the system are

\[| G_{D} G_{A} \rangle, \; | E_{D} G_{A} \rangle, \; | G_{D} E_{A} \rangle, \; | E_{D} E_{A} \rangle\]

Here the notation refers to the ground (\(G\)) or excited (\(E\)) vibronic states of either donor (\(D\)) or acceptor (\(A\)).
More explicitly, the states also include the vibrational excitation:

\[| E_{D} G_{A} \rangle = | e_{D} n_{D}; g_{A} n_{A} \rangle\]

Thus the system can have no excitation, one excitation on the donor, one excitation on the acceptor, or one excitation on both donor and acceptor. For our purposes, let's only consider the two electronic configurations that are close in energy, and are likely to play a role in the resonance transfer in Equation \ref{14.2}: \(| E_{D} G_{A} \rangle\) and \(| G_{D} E_{A} \rangle\).

Since the donor and acceptor are weakly coupled, we can write our Hamiltonian for this problem in a form that can be solved by perturbation theory (\(H = H_{0} + V\)). Working with the DHO approach, our material Hamiltonian has four electronic manifolds to consider:

\[H_{0} = H_{D}^{G} + H_{D}^{E} + H_{A}^{G} + H_{A}^{E} \label{14.3}\]

Each of these is defined as we did previously, with an electronic energy and a dependence on a displaced nuclear coordinate. For instance

\[\begin{align} H_{D}^{E} &= | e_{D} \rangle E_{e}^{D} \langle e_{D} | + H_{e}^{D} \label{14.4} \\[4pt] H_{e}^{D} &= \hbar \omega_{0}^{D} \left( \tilde{p}_{D}^{2} + \left( \tilde{q}_{D} - \tilde{d}_{D} \right)^{2} \right) \label{14.5} \end{align}\]

\(E_{e}^{D}\) is the electronic energy of the donor excited state.

Then, what is \(V\)? Classically it is a Coulomb interaction of the form

\[V = \sum_{ij} \frac{q_{i}^{D} q_{j}^{A}}{\left| r_{i}^{D} - r_{j}^{A} \right|} \label{14.6}\]

Here the sum is over all electrons and nuclei of the donor (\(i\)) and acceptor (\(j\)).

As is, this is challenging to work with, but at large separation between molecules, we can recast it as a dipole–dipole interaction. We define a frame of reference for the donor and acceptor molecule, and assume that the distance between molecules is large. Then the dipole moments for the molecules are

\[\begin{aligned} \overline{\mu}^{D} &= \sum_{i} q_{i}^{D} \left( r_{i}^{D} - r_{0}^{D} \right) \\ \overline{\mu}^{A} &= \sum_{j} q_{j}^{A} \left( r_{j}^{A} - r_{0}^{A} \right) \end{aligned} \label{14.7}\]

The interaction between donor and acceptor takes the form of a dipole–dipole interaction:

\[V = \dfrac{3 \left( \overline{\mu}_{A} \cdot \hat{r} \right) \left( \overline{\mu}_{D} \cdot \hat{r} \right) - \overline{\mu}_{A} \cdot \overline{\mu}_{D}}{r^{3}} \label{14.8}\]

where \(r\) is the distance between donor and acceptor dipoles and \(\hat{r}\) is a unit vector that marks the direction between them. The dipole operators here are taken to only act on the electronic states and be independent of nuclear configuration, i.e., the Condon approximation.
We write the transition dipole matrix elements that couple the ground and excited electronic states for the donor and acceptor as

\[\begin{align} \overline{\mu}_{A} &= | A \rangle \overline{\mu}_{AA^{*}} \langle A^{*} | + | A^{*} \rangle \overline{\mu}_{A^{*}A} \langle A | \label{14.9} \\[4pt] \overline{\mu}_{D} &= | D \rangle \overline{\mu}_{DD^{*}} \langle D^{*} | + | D^{*} \rangle \overline{\mu}_{D^{*}D} \langle D | \label{14.10} \end{align}\]

For the dipole operator, we can separate the scalar and orientational contributions as

\[\overline{\mu}_{A} = \hat{u}_{A} \mu_{A} \label{14.11}\]

This allows the transition dipole interaction in Equation \ref{14.8} to be written as

\[V = \mu_{A} \mu_{D} \frac{\kappa}{r^{3}} \left[ | D^{*} A \rangle \langle A^{*} D | + | A^{*} D \rangle \langle D^{*} A | \right] \label{14.12}\]

All of the orientational factors are now in the term \(\kappa\):

\[\kappa = 3 \left( \hat{u}_{A} \cdot \hat{r} \right) \left( \hat{u}_{D} \cdot \hat{r} \right) - \hat{u}_{A} \cdot \hat{u}_{D} \label{14.13}\]

We can now obtain the rates of energy transfer using Fermi's Golden Rule expressed as a correlation function in the interaction Hamiltonian:

\[w_{k\ell} = \frac{2\pi}{\hbar^{2}} \sum_{\ell} p_{\ell} \left| V_{k\ell} \right|^{2} \delta\left( \omega_{k} - \omega_{\ell} \right) = \frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} dt \left\langle V_{I}(t) V_{I}(0) \right\rangle \label{14.14}\]

Note that this is not a Fourier transform! Since we are using a correlation function, there is an assumption that we have an equilibrium system, even though we are initially in the excited donor state. This is reasonable for the case that there is a clear time scale separation between the picosecond vibrational relaxation and thermalization in the donor excited state and the time scale (or inverse rate) of the energy transfer process.

Now substituting the initial state \(\ell = | D^{*} A \rangle\) and the final state \(k = | A^{*} D \rangle\), we find

\[w_{ET} = \frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} dt \, \frac{\left\langle \kappa^{2} \right\rangle}{r^{6}} \left\langle D^{*} A \left| \mu_{D}(t) \mu_{A}(t) \mu_{D}(0) \mu_{A}(0) \right| D^{*} A \right\rangle \label{14.15}\]

where

\[\mu_{D}(t) = e^{i H_{D} t / \hbar} \mu_{D} e^{-i H_{D} t / \hbar}.\]

Here, we have neglected the rotational motion of the dipoles. Most generally, the orientational average is

\[\left\langle \kappa^{2} \right\rangle = \langle \kappa(t) \kappa(0) \rangle \label{14.16}\]

However, this factor is easier to evaluate if the dipoles are static, or if they rapidly rotate to become isotropically distributed. For the static case, \(\left\langle \kappa^{2} \right\rangle = 0.475\).
For the case of fast loss of orientation, the average becomes

\[\left\langle \kappa(t) \kappa(0) \right\rangle \rightarrow \left\langle \kappa^{2} \right\rangle = \dfrac{2}{3}\]

Since the dipole operators act only on \(A\) or \(D^{*}\), and the \(D\) and \(A\) nuclear coordinates are orthogonal, we can separate terms in the donor and acceptor states:

\[\begin{aligned} w_{ET} &= \frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} dt \, \frac{\left\langle \kappa^{2} \right\rangle}{r^{6}} \left\langle D^{*} \left| \mu_{D}(t) \mu_{D}(0) \right| D^{*} \right\rangle \left\langle A \left| \mu_{A}(t) \mu_{A}(0) \right| A \right\rangle \\ &= \frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} dt \, \frac{\left\langle \kappa^{2} \right\rangle}{r^{6}} \, C_{D^{*}D^{*}}(t) \, C_{AA}(t) \end{aligned} \label{14.17}\]

The terms in this equation represent the dipole correlation function for the donor initiating in the excited state and the acceptor correlation function initiating in the ground state. That is, these are correlation functions for the donor emission (fluorescence) and acceptor absorption. Remembering that \(| D^{*} \rangle\) represents the electronic and nuclear configuration \(| d^{*} n_{D^{*}} \rangle\), we can use the displaced harmonic oscillator Hamiltonian or energy gap Hamiltonian to evaluate the correlation functions. For the case of Gaussian statistics we can write

\[C_{D^{*}D^{*}}(t) = \left| \mu_{DD^{*}} \right|^{2} e^{-i\left(\omega_{DD^{*}} - 2\lambda_{D}\right)t - g_{D}^{*}(t)} \label{14.18}\]

\[C_{AA}(t) = \left| \mu_{AA^{*}} \right|^{2} e^{-i\omega_{AA^{*}}t - g_{A}(t)} \label{14.19}\]

Here we made use of

\[\omega_{D^{*}D} = \omega_{DD^{*}} - 2\lambda_{D} \label{14.20}\]

which expresses the emission frequency as a frequency shift of \(2\lambda_{D}\) relative to the donor absorption frequency. The dipole correlation functions can be expressed in terms of the inverse Fourier transforms of a fluorescence or absorption lineshape:

\[C_{D^{*}D^{*}}(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} d\omega \, e^{-i\omega t} \sigma_{fluor}^{D}(\omega) \label{14.21}\]

\[C_{AA}(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} d\omega \, e^{-i\omega t} \sigma_{abs}^{A}(\omega) \label{14.22}\]

To express the rate of energy transfer in terms of its common practical form, we make use of Parseval's theorem, which states that if a Fourier transform pair is defined for two functions, the integral over a product of those functions is equal whether evaluated in the time or frequency domain:

\[\int_{-\infty}^{\infty} f_{1}(t) f_{2}^{*}(t) \, dt = \int_{-\infty}^{\infty} \tilde{f}_{1}(\omega) \tilde{f}_{2}^{*}(\omega) \, d\omega \label{14.23}\]

This allows us to express the energy transfer rate as an overlap integral \(J_{DA}\) between the donor fluorescence and acceptor absorption spectra:

\[w_{ET} = \frac{1}{\hbar^{2}} \frac{\left\langle \kappa^{2} \right\rangle}{r^{6}} \left| \mu_{DD^{*}} \right|^{2} \left| \mu_{AA^{*}} \right|^{2} \int_{-\infty}^{+\infty} d\omega \, \hat{\sigma}_{abs}^{A}(\omega) \, \hat{\sigma}_{fluor}^{D}(\omega) \label{14.24}\]

Here \(\hat{\sigma}\) is the lineshape normalized to the transition matrix element squared: \(\hat{\sigma} = \sigma / |\mu|^{2}\).
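The two orientational averages quoted above are easy to check by Monte Carlo sampling. The following sketch (our addition; the sample size is arbitrary) draws isotropically distributed dipole orientations and evaluates \(\kappa\) from Equation \ref{14.13}. It recovers \(\langle\kappa^2\rangle = 2/3\) for rapidly reorienting dipoles, and one common static-limit average, \(\langle|\kappa|^{2/3}\rangle^3 \approx 0.476\), consistent with the 0.475 quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Uniformly distributed unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 500_000
uA, uD = random_unit_vectors(n), random_unit_vectors(n)
rhat = np.array([0.0, 0.0, 1.0])        # fixed donor-acceptor axis

# Orientational factor of Eq. (14.13)
kappa = 3 * (uA @ rhat) * (uD @ rhat) - np.sum(uA * uD, axis=1)

print("<kappa^2>         =", np.mean(kappa**2))                 # -> 2/3
print("<|kappa|^(2/3)>^3 =", np.mean(np.abs(kappa)**(2/3))**3)  # -> ~0.476
```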
The overlap integral is a measure of resonance between donor and acceptor transitions.

So, the energy transfer rate scales as \(r^{-6}\), depends on the strengths of the electronic transitions for donor and acceptor molecules, and requires resonance between donor fluorescence and acceptor absorption. One of the things we have neglected is that the rate of energy transfer will also depend on the rate of excited donor state population relaxation. Since this relaxation is typically dominated by the donor fluorescence rate, the rate of energy transfer is commonly written in terms of an effective distance \(R_0\) and the fluorescence lifetime of the donor \(\tau_D\):

\[w_{ET} = \frac{1}{\tau_{D}} \left( \frac{R_{0}}{r} \right)^{6} \label{14.25}\]

At the critical transfer distance \(R_0\), the rate (or probability) of energy transfer is equal to the rate of fluorescence. \(R_0\) is defined in terms of the sixth root of the terms in Equation \ref{14.24}, and is commonly written as

\[R_{0}^{6} = \frac{9000 \ln(10) \, \phi_{D} \left\langle \kappa^{2} \right\rangle}{128 \pi^{5} n^{4} N_{A}} \int_{0}^{\infty} d\bar{\nu} \, \frac{\sigma_{fluor}^{D}(\bar{\nu}) \, \varepsilon_{A}(\bar{\nu})}{\bar{\nu}^{4}} \label{14.26}\]

This is the practical definition that accounts for the frequency dependence of the transition dipole interaction and non-radiative donor relaxation, in addition to being expressed in common units. \(\bar{\nu}\) represents frequency in units of cm⁻¹. The fluorescence spectrum \(\sigma_{fluor}^{D}\) must be normalized to unit area, so that \(\sigma_{fluor}^{D}(\bar{\nu})\) is expressed in cm (inverse wavenumbers). The absorption spectrum \(\varepsilon_{A}(\bar{\nu})\) must be expressed in molar decadic extinction coefficient units (liter/mol·cm). \(n\) is the index of refraction of the solvent, \(N_A\) is Avogadro's number, and \(\phi_D\) is the donor fluorescence quantum yield.

Transition Dipole Interaction

FRET is one example of a quantum mechanical transition dipole interaction. The interaction between two dipoles, \(A\) and \(D\), in Equation \ref{14.12} is

\[V = \frac{\kappa}{r^{3}} \left\langle e \left| \mu_{A} \right| g \right\rangle \left\langle g \left| \mu_{D} \right| e \right\rangle \label{14.27}\]

Here, \(\left\langle g \left| \mu_{D} \right| e \right\rangle\) is the transition dipole moment in Debye for the ground-to-excited state transition of molecule \(D\). \(r\) is the distance between the centers of the point dipoles, and \(\kappa\) is the unitless orientational factor

\[\kappa = 3 \cos\theta_{1} \cos\theta_{2} - \cos\theta_{12}\]

The figure below illustrates this function for the case of two parallel dipoles, as a function of the angle between the dipole and the vector defining their separation.

In the case of vibrational coupling, the dipole operator is expanded in the vibrational normal coordinate, \(\mu = \mu_{0} + \left( \partial\mu / \partial Q_{A} \right) Q_{A}\), and the harmonic transition dipole matrix elements are

\[\left\langle 1 \left| \mu_{A} \right| 0 \right\rangle = \sqrt{\frac{\hbar}{2 c \omega_{A}}} \frac{\partial\mu}{\partial Q_{A}} \label{14.28}\]

where \(\omega_{A}\) is the vibrational frequency.
If the frequency \(\bar{\nu}_{A}\) is given in cm⁻¹, and the transition dipole moment \(\partial\mu / \partial Q_{A}\) is given in units of \(\mathrm{D}\,\text{Å}^{-1}\,\mathrm{amu}^{-1/2}\), then the matrix element in units of D is

\[\left| \left\langle 1 \left| \mu_{A} \right| 0 \right\rangle \right| = 4.1058 \, \bar{\nu}_{A}^{-1/2} \left( \partial\mu / \partial Q_{A} \right)\]

If the distance between dipoles is specified in Ångstroms and the transition dipole moments in Debye, then the transition dipole coupling from Equation \ref{14.27} in cm⁻¹ is

\[V \left( \mathrm{cm}^{-1} \right) = 5034 \, \kappa \, \mu_{A} \mu_{D} \, r^{-3}.\]

Experimentally, one can determine the transition dipole moment from the absorbance \(A\) as

\[A = \left( \frac{\pi N_{A}}{3 c^{2}} \right) \left( \frac{\partial\mu}{\partial Q_{A}} \right)^{2} \label{14.29}\]
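As a worked example of the practical expressions in Equations \ref{14.25} and \ref{14.26}, the following sketch computes \(R_0\) and the transfer rate for assumed Gaussian donor emission and acceptor absorption spectra. All spectral parameters, the quantum yield, lifetime, and refractive index are illustrative assumptions, not values from the notes.

```python
import numpy as np

# --- Model inputs (illustrative assumptions) ---
nu = np.linspace(12000, 24000, 6000)          # wavenumber axis / cm^-1
def gauss(x, x0, s):
    return np.exp(-(x - x0)**2 / (2 * s**2))

sigma_D = gauss(nu, 18000, 600)               # donor emission (shape only)
sigma_D /= np.trapz(sigma_D, nu)              # normalize to unit area -> units of cm
eps_A = 5.0e4 * gauss(nu, 17800, 600)         # acceptor extinction / M^-1 cm^-1

phi_D, kappa2, n_refr = 0.5, 2.0 / 3.0, 1.33  # quantum yield, <kappa^2>, refr. index
tau_D = 4.0e-9                                # donor fluorescence lifetime / s
N_A = 6.022e23

# --- Eq. (14.26): R0^6 in cm^6 ---
J_overlap = np.trapz(sigma_D * eps_A / nu**4, nu)     # M^-1 cm^3
R0_6 = (9000 * np.log(10) * phi_D * kappa2 /
        (128 * np.pi**5 * n_refr**4 * N_A)) * J_overlap
R0 = R0_6**(1 / 6) * 1e8                      # cm -> Angstrom
print(f"R0 = {R0:.1f} Angstrom")              # ~56 A for these inputs

# --- Eq. (14.25): transfer rate at an assumed separation r = 40 A ---
r = 40.0
w_ET = (1 / tau_D) * (R0 / r)**6
print(f"w_ET = {w_ET:.3e} s^-1")
```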
15.3: Excitons in Molecular Aggregates
The absorption spectra of periodic arrays of interacting molecular chromophores show unique spectral features that depend on the size of the system and disorder of the environment. We will investigate some of these features, focusing on the delocalized eigenstates of these coupled chromophores, known as excitons. These principles apply to the study of molecular crystals, J-aggregates, photosensitizers, and light-harvesting complexes in photosynthesis. Similar topics are used in the description of properties of conjugated polymers and organic photovoltaics, and for extended vibrational states in IR and Raman spectroscopy.

Strong coupling between molecules leads to the delocalization of electronic or vibrational eigenstates, under which weak coupling models like FRET do not apply. From our studies of the coupled two-state system, we know that when the coupling between states is much larger than the energy splitting between the states (\(\varepsilon_1 - \varepsilon_2 \ll 2V\)), the resulting eigenstates \(| \pm \rangle\) are equally weighted symmetric and antisymmetric combinations of the two, whose energy eigenvalues are split by \(2V\). Setting \(\varepsilon_1 = \varepsilon_2 = \varepsilon\),

\[E_{\pm} = \varepsilon \pm V\]

\[| \pm \rangle = \frac{1}{\sqrt{2}} ( | 1 \rangle \pm | 2 \rangle )\]

If we excite one of these molecules, we expect that the excitation will flow back and forth at the Rabi frequency. So, what happens with multiple coupled chromophores? We take particular interest in the placement of coupled chromophores into periodic arrays in space. In the strong coupling regime, the variation in the uncoupled energies is small, making this a problem of coupled quasi-degenerate states. With a spatially periodic structure, the resulting states bear close similarity to simple descriptions of electronic band structure using the tight-binding model.

Excitons refer to electronic excited states that are not localized to a particular molecule. But beyond that there are many flavors. We will concentrate on Frenkel excitons, which refer to excited states in which the excited electron and the corresponding hole (or electron vacancy) reside on the same molecule. All molecules remain electrically neutral in the ground and excited states. This corresponds to what one would expect when one has resonant dipole–dipole interactions between molecules. When there is charge transfer character, the electron and hole can reside on different molecules of the coupled complex. These are referred to as Mott–Wannier excitons.

To describe the spectroscopy of an array of many coupled chromophores, it is first instructive to work through a pair of coupled molecules. This is in essence the two-level problem from earlier. We consider a pair of molecules (\(1\) and \(2\)), each of which has a ground and an electronically excited state (\(| g \rangle\) and \(| e \rangle\)) split by an energy gap \(\varepsilon_0\), and a transition dipole moment \(\overline{\mu}\).
In the absence of coupling, the state of the system can be specified by giving the electronic state of both molecules, leading to four possible states: \(|gg\rangle, |eg\rangle, |ge\rangle, |ee\rangle\), whose energies are \(0, \varepsilon_0, \varepsilon_0\), and \(2\varepsilon_0\), respectively.

For shorthand we define the ground state as \(|G\rangle\) and the excited states as \(|1\rangle\) and \(|2\rangle\) to signify that the electronic excitation is on either molecule \(1\) or \(2\). In addition, the molecules are spaced by a separation \(r_{12}\), and there is a transition dipole interaction that couples the molecules:

\[V = J ( | 2 \rangle \langle 1 | + | 1 \rangle \langle 2 | )\]

Following our description of transition dipole coupling, the coupling strength \(J\) is given by

\[J=\frac{\left(\bar{\mu}_{1} \cdot \bar{\mu}_{2}\right)\left|\bar{r}_{12}\right|^{2}-3\left(\bar{\mu}_{1} \cdot \bar{r}_{12}\right)\left(\bar{\mu}_{2} \cdot \bar{r}_{12}\right)}{\left|\bar{r}_{12}\right|^{5}}=\frac{\mu_{1} \mu_{2}}{r_{12}^{3}} \kappa\]

where the orientational factor is

\[\kappa=\left(\hat{\mu}_{1} \cdot \hat{\mu}_{2}\right)-3\left(\hat{\mu}_{1} \cdot \hat{r}_{12}\right)\left(\hat{\mu}_{2} \cdot \hat{r}_{12}\right)\]

We assume that the coupling is not too strong, so that we can just concentrate on how it influences \(|1\rangle\) and \(|2\rangle\) but not \(|G\rangle\). Then we only need to describe the coupling-induced shifts to the singly excited states, which are described by the Hamiltonian

\[H = \left( \begin{array}{ll} \varepsilon_{0} & J \\ J & \varepsilon_{0} \end{array} \right)\]

As stated above, we find that the eigenvalues are

\[E_{\pm} = \varepsilon_{0} \pm J\]

and that the eigenstates are

\[|\pm\rangle=\frac{1}{\sqrt{2}}(|1\rangle \pm|2\rangle)\]

These symmetric and antisymmetric states are delocalized across the two molecules, and in the language of Frenkel excitons are referred to as the one-exciton states. Furthermore, the dipole operator for the dimer is

\[\overline{M} = \overline{\mu}_{1} + \overline{\mu}_{2}\]

and so the transition dipole matrix elements are

\[M_{\pm} = \langle \pm | \overline{M} | G \rangle = \frac{1}{\sqrt{2}} \left( \overline{\mu}_{1} \pm \overline{\mu}_{2} \right)\]

\(M_+\) and \(M_-\) are oriented perpendicular to each other in the molecular frame. A numerical check of these dimer results is sketched below.
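This minimal sketch (our addition; the parameter values and the dipole angle are arbitrary) diagonalizes the \(2 \times 2\) one-exciton Hamiltonian and confirms the eigenvalues \(\varepsilon_0 \pm J\), the perpendicularity of \(M_+\) and \(M_-\), and the total intensity \(2|\mu|^2\).

```python
import numpy as np

# Illustrative dimer parameters (arbitrary energy units)
eps0, J = 2.0, -0.1
H = np.array([[eps0, J],
              [J, eps0]])
E, C = np.linalg.eigh(H)                  # eigenvalues eps0 +/- J
print(E)                                  # [1.9, 2.1] here

# Transition dipoles of the eigenstates for unit site dipoles at an assumed angle
theta = np.deg2rad(30)
mu1 = np.array([1.0, 0.0])
mu2 = np.array([np.cos(theta), np.sin(theta)])
M = C.T @ np.stack([mu1, mu2])            # M_k = sum_n C[n,k] * mu_n
print(np.dot(M[0], M[1]))                 # ~0: M+ and M- are perpendicular
print(np.sum(M**2, axis=1), np.sum(M**2)) # Davydov pair intensities; total = 2|mu|^2
```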
We can now predict the absorption spectrum for the dimer. We have two transitions from the ground state to the \(| \pm \rangle\) states, which are resonant at \(\hbar\omega = \varepsilon_0 \pm J\) and which have amplitudes \(\left| M_{\pm} \right|^{2}\). The splitting between the peaks is referred to as the Davydov splitting. Note that the relative amplitude of the peaks allows one to infer the angle between the molecular transition dipoles. Also, note that for \(\theta = 0°\) or \(90°\), all amplitude appears in one transition with magnitude \(2|\mu|^{2}\), which is referred to as superradiant.

Now let's consider a linear aggregate of \(N\) periodically arranged molecules. We will assume that each molecule is a two-level electronic system with a ground state and an excited state. We will assume that electronic excitation moves an electron from the ground state to an unoccupied orbital of the same molecule. We will label the molecules with integer values (\(n\)) between \(0\) and \(N-1\). If the molecules are separated along the chain by a lattice spacing \(\alpha\), then the size of the chain is \(L = \alpha N\). Each molecule has a transition dipole moment \(\mu\), which makes an angle \(\beta\) with the axis of the chain.

In the absence of interactions, we can specify the state of the system exactly by identifying whether each molecule is in the electronically excited or ground state. If the state of molecule \(n\) within the chain is \(\varphi_n\), which can take on values of \(g\) or \(e\), then

\[| \psi \rangle = | \varphi_{0}, \varphi_{1}, \varphi_{2} \cdots \varphi_{n} \cdots \varphi_{N-1} \rangle\]

This representation of the state of the system is referred to as the site basis, since it is expressed in terms of each molecular site in the chain. For simplicity we write the ground state of the system as

\[| G \rangle = | g, g, g, \ldots, g \rangle\]

If we excite one of the molecules within the aggregate, we have a singly excited state in which the \(n^{th}\) molecule is excited, so that

\[| \psi \rangle = | g, g, g, \ldots, e, \ldots, g \rangle \equiv | n \rangle\]

For shorthand, we identify this product state as \(| n \rangle\), which is to be distinguished from the molecular eigenfunction at site \(n\), \(\varphi_n\).

The singly excited state is assigned an energy \(\varepsilon_0\) corresponding to the electronic energy gap. In the absence of coupling, the singly excited states are \(N\)-fold degenerate, corresponding to a single excitation at any of the \(N\) sites. If two excitations are placed on the chain, we can see that there are \(N(N-1)/2\) possible states with energy \(2\varepsilon_0\), recognizing that the Pauli principle does not allow two excitations on the same site. When coupling is introduced, the mixing of these degenerate states leads to the one-exciton and two-exciton bands. For this discussion, we will concentrate on the one-exciton states.

The coupling between molecule \(n\) and molecule \(n'\) is given by the matrix element \(V_{nn'}\). We will assume that a molecule interacts only with its neighbors, and that each pairwise interaction has a magnitude \(J\):

\[V_{nn'} = J \delta_{n, n' \pm 1}\]

If \(V\) is a dipole–dipole interaction, the orientational factor \(\kappa\) dictates that when the transition dipole angle \(\beta < 54.7°\), the sign of the coupling is \(J < 0\), which is the case known as J-aggregates (after Edwin Jelley), and implies an offset stack of chromophores or head-to-tail arrangement.
If \(\beta > 54.7^{\circ}\) then \(J > 0\), and the system is known as an H-aggregate.

To begin, we apply periodic boundary conditions to this problem, which implies that we are describing the states of an \(N\)-molecule chain within an infinite linear chain. In terms of the Hamiltonian, the molecules at the beginning and end of our chain feel the same symmetric interactions with two neighbors as the other molecules. To write this in terms of a finite \(N \times N\) matrix, one couples the first and last members of the chain:

\[J_{0, N-1} = J_{N-1, 0} = J.\]

With these observations in mind, we can write the Frenkel exciton Hamiltonian for the linear aggregate in terms of a system Hamiltonian that reflects the individual sites and their couplings:

\[\begin{align} H_{0} &= H_{S} + V \\[4pt] H_{S} &= \sum_{n=0}^{N-1} \varepsilon_{0} | n \rangle \langle n | \\[4pt] V &= \sum_{n, n'=0}^{N-1} J \left\{ | n' \rangle \langle n | + | n \rangle \langle n' | \right\} \delta_{n', n \pm 1} \label{14.30} \end{align}\]

Here periodic boundary conditions imply that we replace \(| N \rangle \Rightarrow | 0 \rangle\) and \(| -1 \rangle \Rightarrow | N-1 \rangle\) where they appear.

The optical properties of the aggregate will be obtained by determining the eigenstates of the Hamiltonian. We look for solutions that describe one-exciton eigenstates as an expansion in the site basis,

\[| \psi(x) \rangle = \sum_{n=0}^{N-1} c_{n} | \varphi_{n}\left(x - x_{n}\right) \rangle \label{14.31}\]

which is written in order to point out the dependence of these wavefunctions on the coordinate \(x\) and the position of a particular molecule at \(x_n\). Such an expansion should work well when the electronic interaction between sites is weak enough to treat perturbatively. For the electronic structure of solids, this is known as the tight-binding model, which describes band structure as linear combinations of atomic orbitals.

Rather than diagonalizing the Hamiltonian, we can take advantage of its translational symmetry to obtain the eigenstates. The symmetry of the Hamiltonian is such that it is unchanged by any integral number of translations along the chain. That is, the results are unchanged for any summation in Equations \ref{14.30} and \ref{14.31} over \(N\) consecutive integers. Similarly, the molecular wavefunction at any site is unchanged by such a translation. Written in terms of a displacement operator \(D = e^{i p_{x} \alpha / \hbar}\) that shifts the molecular wavefunction by one lattice constant,

\[| \varphi(x + n\alpha) \rangle = D^{n} | \varphi(x) \rangle \label{14.32}\]

These observations underlie Bloch's theorem, which states that the eigenstates of a periodic system will vary only by a phase shift when displaced by a lattice constant:

\[| \psi(x + \alpha) \rangle = e^{ik\alpha} | \psi(x) \rangle \label{14.33}\]

Here \(k\) is the wavevector, or reciprocal lattice vector, a real quantity. Thus the expansion coefficients in Equation \ref{14.31} will have an amplitude that reflects an excitation spread equally among the \(N\) sites, and only vary between sites by a spatially varying phase factor. Equivalently, the eigenstates are expected to have a form that is a product of a spatially varying phase factor and a periodic function:

\[| \psi(x) \rangle = e^{ikx} u(x) \label{14.34}\]

These phase factors are closely related to the lattice displacement operators.
If the linear chain has \(N\) molecules, the eigenstates must remain unchanged with a translation by the length of the chain \(L = \alpha N\):

\[| \psi\left(x_{n} + L\right) \rangle = | \psi\left(x_{n}\right) \rangle\]

Therefore, we see that our wavefunctions must satisfy

\[N k \alpha = 2\pi m \label{14.35}\]

where \(m\) is an integer. Furthermore, since there are \(N\) sites on the chain, unique solutions to Equation \ref{14.35} require that \(m\) can only take on \(N\) consecutive integer values. Like the site index \(n\), there is no unique choice of \(m\). Rewriting Equation \ref{14.35}, the wavevector is

\[k_{m} = \frac{2\pi}{\alpha} \frac{m}{N} \label{14.36}\]

We see that for an \(N\)-site lattice, \(m\) can take on \(N\) consecutive integer values, so that \(k_m \alpha\) varies over a \(2\pi\) range of angles. The wavevector index \(m\) labels the \(N\) one-exciton eigenstates of an \(N\)-molecule chain. By convention, \(k_m\) is chosen such that

\[-\pi/\alpha < k_{m} \leq \pi/\alpha.\]

Then the corresponding values of \(m\) are integers from \(-(N-1)/2\) to \((N-1)/2\) if there is an odd number of lattice sites, or from \(-(N-2)/2\) to \(N/2\) for an even number of sites. For example, a 20-molecule chain would have \(m = -9, -8, \ldots, 9, 10\).

These findings lead to the general form for the \(m\) one-exciton eigenstates:

\[| k_{m} \rangle = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} e^{ink_{m}\alpha} | n \rangle \label{14.37}\]

The factor of \(\sqrt{N}\) assures proper normalization of the wavefunction, \(\langle \psi | \psi \rangle = 1\). Comparing Equations \ref{14.37} and \ref{14.31}, we see that the expansion coefficient for the \(n^{th}\) site of the \(m^{th}\) eigenstate is

\[c_{m,n} = \frac{1}{\sqrt{N}} e^{ink_{m}\alpha} = \frac{1}{\sqrt{N}} e^{i2\pi nm/N} \label{14.38}\]

We see that for state \(| k_0 \rangle\), with \(m = 0\), the phase factor is the same for all sites. In other words, the transition dipoles of the chain will oscillate in phase, constructively adding for all sites. For the case that \(k_m = \pi/\alpha\), we see that each site is out of phase with its nearest neighbors. Looking at the case of the dimer, \(N = 2\), we see that \(m = 0\) or \(1\), \(k_m = 0\) or \(\pi/\alpha\), and we recover the expected symmetric and antisymmetric eigenstates:

\[| k_{0} \rangle = \frac{1}{\sqrt{2}} \sum_{n=0}^{1} e^{in0} | n \rangle = \frac{1}{\sqrt{2}} ( | 0 \rangle + | 1 \rangle )\]

for \(k = 0\), and

\[| k_{1} \rangle = \frac{1}{\sqrt{2}} \sum_{n=0}^{1} e^{in\pi} | n \rangle = \frac{1}{\sqrt{2}} ( | 0 \rangle - | 1 \rangle )\]

for \(k = \pi/\alpha\).

Schematically for \(N = 20\), we see how the dipole phase varies with \(k_m\), plotting the real and imaginary components of the expansion coefficients. Also, we can evaluate the one-exciton transition dipole matrix elements, \(M(k_m)\), which are expressed as superpositions of the dipole moments at each site, \(\overline{\mu}_n\):

\[\overline{M} = \sum_{n=0}^{N-1} \overline{\mu}_{n} \label{14.39}\]

\[\begin{align} M_{m} &= \left\langle k_{m} | \overline{M} | G \right\rangle \\[4pt] &= \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} e^{ink_{m}\alpha} \left\langle n \left| \overline{\mu}_{n} \right| G \right\rangle \label{14.40} \end{align}\]

The phase of the transition dipoles of the chain matches their phase within each \(k\) state.
Thus for our problem, in which all of the dipoles are parallel, transitions from the ground state to the \(k_m = 0\) state will carry all of the oscillator strength. Plotted below is an illustration of the phase relationships between dipoles in a chain with \(N = 20\).

Finally, let's solve for the one-exciton energy eigenvalues by calculating the expectation value of the Hamiltonian operator, Equation \ref{14.30}:

\[\begin{align} E\left(k_{m}\right) &= \left\langle k_{m} \left| H_{0} \right| k_{m} \right\rangle \\[4pt] &= \frac{1}{N} \sum_{n, n'=0}^{N-1} e^{i(n-n')k_{m}\alpha} \left\langle n' \left| H_{0} \right| n \right\rangle \label{14.41} \end{align}\]

\[\left\langle k_{m} \left| H_{S} \right| k_{m} \right\rangle = \frac{1}{N} \sum_{n=0}^{N-1} \varepsilon_{0} = \varepsilon_{0}\]

\[\begin{align} \left\langle k_{m} | V | k_{m} \right\rangle &= \frac{1}{N} \sum_{n=0}^{N-1} \left\{ e^{ik_{m}\alpha} \langle n-1 | V | n \rangle + e^{-ik_{m}\alpha} \langle n+1 | V | n \rangle \right\} \\[4pt] &= 2J \cos\left(k_{m}\alpha\right) \label{14.42} \end{align}\]

We predict that the one-exciton band of states varies in energy between \(\varepsilon_0 - 2J\) and \(\varepsilon_0 + 2J\). If we take \(J\) as negative, as expected for the case of J-aggregates (negative couplings), then \(k = 0\) is at the bottom of the band. Examples are illustrated below for the \(N = 20\) aggregate.

Note that the result in Equation \ref{14.42} gives a splitting of \(4J\) between the two states of the dimer, unlike the expected \(2J\) splitting from earlier. This is a result of the periodic boundary conditions that we enforce here. We are now in a position to plot the absorption spectrum for the aggregate, summing over eigenstates and assuming a Lorentzian lineshape for the system:

\[\sigma(\omega) = \sum_{m} \left| M_{m} \right|^{2} \frac{\Gamma^{2}}{\left( \hbar\omega - E\left(k_{m}\right) \right)^{2} + \Gamma^{2}}\]

For a 20-oscillator chain with negative coupling, the spectrum is plotted below. We have one peak corresponding to the \(k_0\) mode that is peaked at \(\hbar\omega = \varepsilon_0 + 2J\) and carries the oscillator strength of all 20 dipoles.

Absorption spectrum for an \(N = 20\) aggregate with periodic boundary conditions and \(J < 0\).

Similar types of solutions appear without using periodic boundary conditions. For the case of open boundary conditions, in which the molecules at the ends of the chain are coupled only to one nearest neighbor, it is helpful to label the sites from \(n = 1, 2, \ldots, N\). Furthermore, \(m = 1, 2, \ldots, N\). Under those conditions, one can solve for the eigenstates using the boundary condition that \(\psi = 0\) at sites \(0\) and \(N+1\). The change in boundary condition gives sine solutions:

\[| k_{m} \rangle = \sqrt{\frac{2}{N+1}} \sum_{n=1}^{N} \sin\left( \frac{\pi mn}{N+1} \right) | n \rangle\]

The energy eigenvalues are

\[E_{m} = \varepsilon_{0} + 2J \cos\left( \frac{\pi m}{N+1} \right)\]

A numerical sketch of this open-chain case is given below.
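The open-chain results can be checked by direct diagonalization. The sketch below (our addition; \(\varepsilon_0\) and \(J\) are arbitrary) builds the \(N\)-site nearest-neighbor Hamiltonian, compares the eigenvalues with \(E_m = \varepsilon_0 + 2J\cos(\pi m/(N+1))\), and evaluates the fraction of oscillator strength carried by the lowest (\(m = 1\)) state.

```python
import numpy as np

N, eps0, J = 20, 2.0, -0.05    # chain length, site energy, coupling (assumed)
H = eps0 * np.eye(N) + J * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))
E, C = np.linalg.eigh(H)       # open-boundary one-exciton states

# Compare with E_m = eps0 + 2J cos(pi m / (N+1)), m = 1..N
m = np.arange(1, N + 1)
print(np.allclose(E, np.sort(eps0 + 2 * J * np.cos(np.pi * m / (N + 1)))))  # True

# Oscillator strengths for parallel unit site dipoles: M_k = sum_n C[n,k]
M2 = C.sum(axis=0)**2
print(M2[0] / M2.sum())        # ~0.85 here; tends toward 8/pi^2 ~ 0.81 for large N
```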
Returning to the case of the dimer (\(N = 2\)), we can now confirm that we recover the symmetric and antisymmetric eigenstates, with an energy splitting of \(2J\).

If you calculate the oscillator strength for these transitions using the dipole operator in Equation \ref{14.39}, one finds:

\[M_{m}^{2} = \left| \left\langle k_{m} | \overline{M} | G \right\rangle \right|^{2} = \left( \frac{1 - (-1)^{m}}{2} \right)^{2} \frac{2\mu^{2}}{N+1} \cot^{2}\left( \frac{\pi m}{2(N+1)} \right)\]

This result shows that most of the oscillator strength lies in the \(m = 1\) state, for which all oscillators are in phase. For large \(N\), \(M_1^2\) carries 81% of the oscillator strength, with approximately 9% in the transition to the \(m = 3\) state.

Absorption spectra for \(N = 3, 7, 11\) with negative coupling.

The shift in the peak of the absorption relative to the monomer gives the coupling \(J\). Including long-range interactions has the effect of shifting the exciton band asymmetrically about \(\varepsilon_0\).

If the chain is not homogeneous, i.e., all molecules do not have the same site energy \(\varepsilon_0\), then we can model this effect as Gaussian random disorder. The energy of a given site is

\[\varepsilon_{n} = \varepsilon_{0} + \delta\omega_{n}\]

We add an extra term to our earlier Hamiltonian, Equation \ref{14.30}, to account for this variation:

\[H_{0} = H_{S} + H_{dis} + V\]

\[H_{dis} = \sum_{n} \delta\omega_{n} | n \rangle \langle n |\]

The effect is to shift and mix the homogeneous exciton states:

\[\delta\Omega_{k} = \left\langle k \left| H_{dis} \right| k \right\rangle = \frac{2}{N+1} \sum_{n} \sin^{2}\left( \frac{\pi kn}{N+1} \right) \delta\omega_{n}\]

We find that these shifts are also Gaussian random variables, with a standard deviation of \(\Delta\sqrt{3/(2(N+1))}\), where \(\Delta\) is the standard deviation for site energies. So, the delocalization of the eigenstate averages the disorder over \(N\) sites, which narrows the distribution of energies by a factor scaling as \(\sqrt{N}\). The narrowing of the absorption lineshape with delocalization is called exchange narrowing. This depends on the distribution of site energies being relatively small: \(\Delta \ll 3\pi|J|/N^{3/2}\).

Absorption spectra for \(N = 2, 6, 30\) normalized to the number of oscillators, with \(3\Delta = J\) and \(J < 0\).

A numerical illustration of this exchange narrowing is sketched below.
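The following sketch (our addition; the disorder strength and coupling are illustrative and chosen to satisfy \(\Delta \ll 3\pi|J|/N^{3/2}\)) averages over realizations of Gaussian site disorder and compares the width of the bright-state energy distribution with the prediction \(\Delta\sqrt{3/(2(N+1))}\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, Delta = 20, -0.05, 0.002     # sites, coupling, site-energy std dev (assumed)
off = J * (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))

E_bright = []
for _ in range(2000):
    H = np.diag(rng.normal(0.0, Delta, N)) + off   # Gaussian site disorder
    E, C = np.linalg.eigh(H)
    M2 = C.sum(axis=0)**2
    E_bright.append(E[np.argmax(M2)])              # energy of the brightest state

print("site-energy std dev: ", Delta)
print("bright-state std dev:", np.std(E_bright))   # narrowed by delocalization
print("prediction:          ", Delta * np.sqrt(3 / (2 * (N + 1))))
```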
15.4: Multiple Particles and Second Quantization
In the case of a large number of nuclear or electronic degrees of freedom (or for photons in a quantum light field), it becomes tedious to write out the explicit product-state form of the state vector, i.e.,

\[| \psi \rangle = | \varphi_{1}, \varphi_{2}, \varphi_{3} \cdots \rangle\]

Under these circumstances it becomes useful to define creation and annihilation operators. If \(| \psi \rangle\) refers to the state of multiple harmonic oscillators, then the Hamiltonian has the form

\[H = \sum_{\alpha} \left( \frac{p_{\alpha}^{2}}{2m_{\alpha}} + \frac{1}{2} m_{\alpha} \omega_{\alpha}^{2} q_{\alpha}^{2} \right) \label{14.43}\]

which can also be expressed as

\[H = \sum_{\alpha} \hbar\omega_{\alpha} \left( a_{\alpha}^{\dagger} a_{\alpha} + \frac{1}{2} \right) \label{14.44}\]

with the eigenstates represented through the occupation of each oscillator,

\[| \psi \rangle = | n_{1}, n_{2}, n_{3} \ldots \rangle.\]

This representation is sometimes referred to as "second quantization", because the classical Hamiltonian was initially quantized by replacing the position and momentum variables by operators, and then these quantum operators were again replaced by raising and lowering operators.

The operator \(a_{\alpha}^{\dagger}\) raises the occupation in mode \(| n_{\alpha} \rangle\), and \(a_{\alpha}\) lowers the excitation in mode \(| n_{\alpha} \rangle\). The eigenvalues of these operators, \(n_{\alpha} \rightarrow n_{\alpha} \pm 1\), are captured by the commutator relationships:

\[\left[ a_{\alpha}, a_{\beta}^{\dagger} \right] = \delta_{\alpha\beta} \label{14.45}\]

\[\left[ a_{\alpha}, a_{\beta} \right] = 0 \label{14.46}\]

Equation \ref{14.45} indicates that the raising and lowering operators do not commute if they are operators in the same degree of freedom (\(\alpha = \beta\)), but they do otherwise. Written another way, these expressions indicate that the order of operations for raising and lowering operators in different degrees of freedom commutes:

\[a_{\alpha} a_{\beta}^{\dagger} = a_{\beta}^{\dagger} a_{\alpha} \label{14.47}\]

\[a_{\alpha} a_{\beta} = a_{\beta} a_{\alpha} \label{14.48A}\]

\[a_{\alpha}^{\dagger} a_{\beta}^{\dagger} = a_{\beta}^{\dagger} a_{\alpha}^{\dagger} \label{14.48B}\]

These expressions also imply that eigenfunctions acted on by the operator products in Equations \ref{14.47}–\ref{14.48B} are the same regardless of the order of operations, so that these eigenfunctions should be symmetric to interchange of the coordinates. That is, these particles are bosons.

This observation provides an avenue to defining raising and lowering operators for electrons. Electrons are fermions, and therefore antisymmetric to exchange of particles. This suggests that electrons will have raising and lowering operators that change the excitation of an electronic state up or down following the relationship

\[b_{\alpha} b_{\beta}^{\dagger} = -b_{\beta}^{\dagger} b_{\alpha} \label{14.49}\]

or

\[\left[ b_{\alpha}, b_{\beta}^{\dagger} \right]_{+} = \delta_{\alpha\beta} \label{14.50}\]

where \([\ldots]_{+}\) refers to the anticommutator. Further, we write

\[\left[ b_{\alpha}, b_{\beta} \right]_{+} = 0 \label{14.51}\]

This comes from considering the action of these operators for the case where \(\alpha = \beta\).
In that case, taking the Hermitian conjugate, we see that Equation \ref{14.51} gives

\[2 b_{\alpha}^{\dagger} b_{\alpha}^{\dagger} = 0 \label{14.52A}\]

or

\[b_{\alpha}^{\dagger} b_{\alpha}^{\dagger} = 0 \label{14.52B}\]

This relationship says that we cannot put two excitations into the same state, as expected for fermions. This relationship indicates that there are only two eigenfunctions for the operators \(b_{\alpha}^{\dagger}\) and \(b_{\alpha}\), namely \(| n_{\alpha} = 0 \rangle\) and \(| n_{\alpha} = 1 \rangle\). This is also seen with Equation \ref{14.50}, which indicates that

\[b_{\alpha}^{\dagger} b_{\alpha} | n_{\alpha} \rangle + b_{\alpha} b_{\alpha}^{\dagger} | n_{\alpha} \rangle = | n_{\alpha} \rangle\]

or

\[b_{\alpha} b_{\alpha}^{\dagger} | n_{\alpha} \rangle = \left( 1 - b_{\alpha}^{\dagger} b_{\alpha} \right) | n_{\alpha} \rangle \label{14.53}\]

If we now set \(| n_{\alpha} \rangle = | 0 \rangle\), we find that Equation \ref{14.53} implies

\[\left. \begin{array}{l} b_{\alpha} b_{\alpha}^{\dagger} | 0 \rangle = | 0 \rangle \\ b_{\alpha}^{\dagger} b_{\alpha} | 0 \rangle = 0 \\ b_{\alpha} b_{\alpha}^{\dagger} | 1 \rangle = 0 \\ b_{\alpha}^{\dagger} b_{\alpha} | 1 \rangle = | 1 \rangle \end{array} \right. \label{14.54}\]

Again, this reinforces that only two states, \(| 0 \rangle\) and \(| 1 \rangle\), are allowed for electron raising and lowering operators. These are known as Pauli operators, since they implicitly enforce the Pauli exclusion principle. Note, in Equation \ref{14.54}, that \(| 0 \rangle\) refers to the eigenvector with an eigenvalue of zero, \(| \varphi_{0} \rangle\), whereas "0" refers to the null vector.

For electronic chromophores, we use the notation \(| g \rangle\) and \(| e \rangle\) for the states of an electron in its ground or excited state. The state of the system for one excitation in an aggregate,

\[| n \rangle = | g, g, g, g \ldots e \ldots g \rangle,\]

can then be written as \(b_{n}^{\dagger} | G \rangle\), or simply \(b_{n}^{\dagger}\), and the Frenkel exciton Hamiltonian is

\[H_{0} = \sum_{n=0}^{N-1} \varepsilon_{0} | n \rangle \langle n | + \sum_{n,m} J_{n,m} | n \rangle \langle m | \label{14.55}\]

or

\[H_{0} = \sum_{n} \varepsilon_{0} b_{n}^{\dagger} b_{n} + \sum_{n,m} J_{n,m} b_{n}^{\dagger} b_{m} \label{14.56}\]
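In the two-dimensional \(\{|0\rangle, |1\rangle\}\) space, the Pauli operators are just \(2 \times 2\) matrices, and the relations above can be verified directly. A minimal sketch (our illustration):

```python
import numpy as np

# Pauli lowering/raising operators in the basis {|0>, |1>}
b = np.array([[0, 1],
              [0, 0]])      # b|1> = |0>,  b|0> = 0
bd = b.T                    # b^dagger

print(np.allclose(b @ bd + bd @ b, np.eye(2)))  # [b, b+]_+ = 1   (Eq. 14.50)
print(np.allclose(bd @ bd, np.zeros((2, 2))))   # (b+)^2 = 0      (Eq. 14.52B)
print(bd @ b)                                   # number operator: diag(0, 1)
```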
15.5: Marcus Theory for Electron Transfer
The displaced harmonic oscillator (DHO) formalism and the Energy Gap Hamiltonian have been used extensively in describing charge transport reactions, such as electron and proton transfer. Here we describe the rates of electron transfer between weakly coupled donor and acceptor states when the potential energy depends on a nuclear coordinate, i.e., nonadiabatic electron transfer. These results reflect the findings of Marcus' theory of electron transfer.

We can represent the problem as calculating the transfer or reaction rate for the transfer of an electron from a donor to an acceptor:

\[\ce{D + A \rightarrow D^{+} + A^{-}} \label{14.57}\]

This reaction is mediated by a nuclear coordinate \(q\). This need not be, and generally isn't, a simple vibrational coordinate. For electron transfer in solution, we most commonly consider electron transfer to progress along a solvent rearrangement coordinate, in which the solvent reorganizes its configuration so that dipoles or charges help to stabilize the extra negative charge at the acceptor site. This type of collective coordinate is illustrated below.

The external response of the medium along the electron transfer coordinate is referred to as "outer shell" electron transfer, whereas the influence of internal vibrational modes that promote ET is called "inner shell". The influence of collective solvent rearrangements or intramolecular vibrations can be captured with the use of an electronic transition coupled to a harmonic bath.

Normally we associate the rates of electron transfer with the free energy along the electron transfer coordinate \(q\). Pictures such as the ones above, which illustrate states of the system with the electron localized on the donor or acceptor and electrons hopping from donor to acceptor, are conceptually represented through diabatic energy surfaces. The electronic coupling \(J\) that results in transfer mixes these diabatic states in the crossing region. From this adiabatic surface, the rate of transfer for the forward reaction is related to the flux across the barrier. From classical transition state theory we can associate the rate with the free energy barrier using

\[k_{f} = A \exp\left( -\Delta G^{\dagger} / k_{B} T \right)\]

If the coupling is weak, we can describe the rates of transfer between donor and acceptor in the diabatic basis with perturbation theory. This accounts for nonadiabatic effects and tunneling through the barrier.

To begin, we consider a simple classical derivation for the free energy barrier and the rate of electron transfer from donor to acceptor states for the case of weakly coupled diabatic states. First we assume that the free energy or potential of mean force for the initial and final states,

\[G(q) = -k_{B}T \ln P(q),\]

is well represented by two parabolas:

\[\begin{align} G_{D}(q) &= \frac{1}{2} m\omega_{0}^{2} \left( q - d_{D} \right)^{2} \label{14.58a} \\[4pt] G_{A}(q) &= \frac{1}{2} m\omega_{0}^{2} \left( q - d_{A} \right)^{2} + \Delta G^{0} \label{14.58b} \end{align}\]

To find the barrier height \(\Delta G^{\dagger}\), we first find the crossing point \(d_C\) where
\[G_{D}(d_{C}) = G_{A}(d_{C}) \label{14.58c}\]

Substituting Equations \ref{14.58a} and \ref{14.58b} into Equation \ref{14.58c},

\[\frac{1}{2} m\omega_{0}^{2} \left( d_{C} - d_{D} \right)^{2} = \Delta G^{\circ} + \frac{1}{2} m\omega_{0}^{2} \left( d_{C} - d_{A} \right)^{2}\]

and solving for \(d_C\) gives

\[\begin{align} d_{C} &= \frac{\Delta G^{\circ}}{m\omega_{0}^{2}} \left( \frac{1}{d_{A} - d_{D}} \right) + \frac{d_{A} + d_{D}}{2} \\[4pt] &= \frac{\Delta G^{\circ}}{2\lambda} \left( d_{A} - d_{D} \right) + \frac{d_{A} + d_{D}}{2} \end{align}\]

The last expression comes from the definition of the reorganization energy (\(\lambda\)), which is the energy to be dissipated on the acceptor surface if the electron is transferred at \(d_D\):

\[\begin{align} \lambda &= G_{A}\left(d_{D}\right) - G_{A}\left(d_{A}\right) \\ &= \frac{1}{2} m\omega_{0}^{2} \left( d_{D} - d_{A} \right)^{2} \label{14.59} \end{align}\]

Then, the free energy barrier to the transfer, \(\Delta G^{\dagger}\), is

\[\begin{aligned} \Delta G^{\dagger} &= G_{D}\left(d_{C}\right) - G_{D}\left(d_{D}\right) \\ &= \frac{1}{2} m\omega_{0}^{2} \left( d_{C} - d_{D} \right)^{2} \\ &= \frac{1}{4\lambda} \left[ \Delta G^{\circ} + \lambda \right]^{2} \end{aligned}\]

So the Arrhenius rate constant for electron transfer via activated barrier crossing is

\[k_{ET} = A \exp\left[ \frac{-\left( \Delta G^{\circ} + \lambda \right)^{2}}{4\lambda kT} \right] \label{14.60}\]

This curve qualitatively reproduces observations of a maximum electron transfer rate under the condition \(-\Delta G^{\circ} = \lambda\), which occurs in the barrierless case when the acceptor parabola crosses the donor state energy minimum.

We expect that we can more accurately describe nonadiabatic electron transfer using the DHO or Energy Gap Hamiltonian, which will include the possibility of tunneling through the barrier when donor and acceptor wavefunctions overlap. We start by writing the transfer rates in terms of the potential energy as before. We recognize that when we calculate thermally averaged transfer rates, this is equivalent to describing the diabatic free energy surfaces. The Hamiltonian is

\[H = H_{0} + V \label{14.61}\]

with

\[H_{0} = | D \rangle H_{D} \langle D | + | A \rangle H_{A} \langle A | \label{14.62}\]

Here \(| D \rangle\) and \(| A \rangle\) refer to the potential where the electron is either on the donor or acceptor, respectively. Also remember that \(| D \rangle\) refers to the vibronic states

\[| D \rangle = | d, n \rangle.\]

These are represented through the same harmonic potential, displaced from one another vertically in energy by

\[\Delta E = E_{A} - E_{D}\]

and horizontally along the reaction coordinate \(q\):

\[\begin{align} H_{D} &= | d \rangle E_{D} \langle d | + H_{d} \\[4pt] H_{A} &= | a \rangle E_{A} \langle a | + H_{a} \label{14.63} \end{align}\]

\[\begin{aligned} H_{d} &= \hbar\omega_{0} \left( p^{2} + \left( q - d_{D} \right)^{2} \right) \\ H_{a} &= \hbar\omega_{0} \left( p^{2} + \left( q - d_{A} \right)^{2} \right) \end{aligned} \label{14.64}\]

Here we are using reduced variables for the momenta, coordinates, and displacements of the harmonic oscillator. The diabatic surfaces can be expressed as product states in the electronic and nuclear configurations: \(| D \rangle = | d, n \rangle\).
The interaction between the surfaces is assigned a coupling \(J\)\[V = J [ | d \rangle \langle a | + | a \rangle \langle d | ] \label{14.65}\]We have made the Condon approximation, implying that the transfer matrix element that describes the electronic interaction has no dependence on nuclear coordinate. Typically this electronic coupling is expected to drop off exponentially with the separation between donor and acceptor orbitals:\[J = J _ {0} \exp \left( - \beta _ {E} \left( R - R _ {0} \right) \right) \label{14.66}\]Here \(\beta_E\) is the parameter governing the distance dependence of the overlap integral. For our purposes, even though this is a function of donor-acceptor separation (\(R\)), we take this to vary slowly over the displacements investigated here, and therefore to be independent of the nuclear coordinate (\(q\)).Marcus evaluated the perturbation theory expression for the transfer rate by calculating Franck-Condon factors for the overlap of donor and acceptor surfaces, in a manner similar to our treatment of the DHO electronic absorption spectrum. Similarly, we can proceed to calculate the rates of electron transfer using the Golden Rule expression for the transfer of amplitude between two states\[w _ {k \ell} = \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \left\langle V _ {I} (t) V _ {I} ( 0 ) \right\rangle \label{14.67}\]Using\[V _ {I} (t) = e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar},\]we write the electron transfer rate in the DHO eigenstate form as\[w _ {E T} = \frac {| J |^{2}} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t e^{- i \Delta E t / \hbar} F (t) \label{14.68}\]where\[F (t) = \left\langle e^{i H _ {d} t / \hbar} e^{- i H _ {a} t / \hbar} \right\rangle \label{14.69}\]This form emphasizes that the electron transfer rate is governed by the overlap of vibrational wavepackets on the donor and acceptor potential energy surfaces.Alternatively, we can cast this in the form of the Energy Gap Hamiltonian. This carries with it a dynamical picture of the electron transfer event. The energies of the two states fluctuate in time as a result of their interaction with the environment. Occasionally the energies of the donor and acceptor states coincide, that is, the energy gap between them is zero. At this point transfer becomes efficient.
By integrating over the correlation function for these energy gap fluctuations, we characterize the statistics for barrier crossing, and therefore forward electron transfer.Similar to before, we define a donor-acceptor energy gap Hamiltonian\[H _ {A D} = H _ {A} - H _ {D} \label{14.70}\] which allows us to write\[F (t) = \left\langle \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {0}^{t} d t^{\prime} H _ {A D} \left( t^{\prime} \right) \right] \right\rangle \label{14.71}\]and\[H _ {A D} (t) = e^{i H _ {d} t / \hbar} H _ {A D} e^{- i H _ {d} t / \hbar} \label{14.72}\]These expressions and application of the cumulant expansion to Equation \ref{14.71} allow us to express the transfer rate in terms of the lineshape function and correlation function\[F (t) = \exp \left[ \frac {- i} {\hbar} \left\langle H _ {A D} \right\rangle t - g (t) \right] \label{14.73}\]\[g (t) = \int _ {0}^{t} d \tau _ {2} \int _ {0}^{\tau _ {2}} d \tau _ {1} C _ {A D} \left( \tau _ {2} - \tau _ {1} \right) \label{14.74}\]\[C _ {A D} (t) = \frac {1} {\hbar^{2}} \left\langle \delta H _ {A D} (t) \delta H _ {A D} ( 0 ) \right\rangle \label{14.75}\]\[\left\langle H _ {A D} \right\rangle = \lambda \label{14.76}\]The lineshape function can also be written as a sum over many coupled nuclear coordinates, \(q_{\alpha}\). This expression is commonly applied to the vibronic (inner shell) contributions to the transfer rate:\[\begin{align} g (t) &= - \sum _ {\alpha} \left( d _ {\alpha}^{A} - d _ {\alpha}^{D} \right)^{2} \left[ \left( \overline {n} _ {\alpha} + 1 \right) \left( e^{- i \omega _ {\alpha} t} - 1 + i \omega _ {\alpha} t \right) + \overline {n} _ {\alpha} \left( e^{i \omega _ {\alpha} t} - 1 - i \omega _ {\alpha} t \right) \right] \\[4pt] &= - \sum _ {\alpha} \left( d _ {\alpha}^{A} - d _ {\alpha}^{D} \right)^{2} \left[ \operatorname {coth} \left( \beta \hbar \omega _ {\alpha} / 2 \right) \left( \cos \omega _ {\alpha} t - 1 \right) - i \left( \sin \omega _ {\alpha} t - \omega _ {\alpha} t \right) \right] \label{14.77} \end{align}\]Substituting the expression for a single harmonic mode into the Golden Rule rate expression gives\[\begin{align} w _ {E T} &= \frac {| J |^{2}} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t e^{- i \Delta E t / \hbar - g (t)} \label{14.78a} \\[4pt] &= \frac {| J |^{2}} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t e^{- i ( \Delta E + \lambda ) t / \hbar} \exp \left[ D \left( \operatorname {coth} \left( \beta \hbar \omega _ {0} / 2 \right) \left( \cos \omega _ {0} t - 1 \right) - i \sin \omega _ {0} t \right) \right] \label{14.78b} \end{align}\]where \[D = \left( d _ {A} - d _ {D} \right)^{2} \label{14.79}\]This expression is very similar to the one that we evaluated for the absorption lineshape of the Displaced Harmonic Oscillator model. A detailed evaluation of this vibronically mediated transfer rate is given in Jortner.To get a feeling for the dependence of \(k\) on \(\Delta E\), we can look at the classical limit \(\hbar \omega \ll k T\). This corresponds to describing a low frequency “solvent mode” or “outer sphere” effect on the electron transfer.
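As a numerical aside (not part of the original notes), Equation \ref{14.78b} can be evaluated for a single mode by discretizing the time integral. The mode frequency, displacement, coupling, and temperature below are illustrative assumptions (with \(\hbar = 1\)); a small damping is added by hand because the undamped single-mode integrand is periodic and would give a comb of delta functions.

```python
import numpy as np

# Illustrative parameters (hbar = 1 units; all values are assumptions)
w0 = 1.0      # mode frequency
D = 2.0       # dimensionless displacement squared, (d_A - d_D)^2
J = 0.05      # electronic coupling
beta = 5.0    # 1/kT
gamma = 0.05  # ad hoc damping mimicking a continuum of bath modes
lam = D * w0  # reorganization energy of this mode
dE = -lam     # energy gap chosen at the barrierless condition

t = np.linspace(-400.0, 400.0, 400001)
dt = t[1] - t[0]
coth = 1.0 / np.tanh(beta * w0 / 2)

# Integrand of Equation 14.78b for a single harmonic mode
F = np.exp(D * (coth * (np.cos(w0 * t) - 1) - 1j * np.sin(w0 * t)))
integrand = np.exp(-1j * (dE + lam) * t - gamma * np.abs(t)) * F

w_ET = (J**2) * np.real(np.sum(integrand)) * dt
print(f"w_ET = {w_ET:.4e} (inverse time, hbar = 1)")
```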
Now, we neglect the imaginary part of \(g(t)\) and take the limit\[\operatorname {coth} ( \beta \hbar \omega / 2 ) \rightarrow 2 / \beta \hbar \omega\]so\[w _ {E T} = \frac {| J |^{2}} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t e^{- i ( \Delta E + \lambda ) t} \exp \left( - \left( \frac {2 D k _ {B} T} {\hbar \omega _ {0}} \right) \left( 1 - \cos \omega _ {0} t \right) \right) \label{14.80}\]Note that the high temperature limit also means the low frequency limit for \(\omega _ {0}\). This means that we can expand\[\cos \omega _ {0} t \approx 1 - \left( \omega _ {0} t \right)^{2} / 2,\]and find\[w _ {E T} = \frac {| J |^{2}} {\hbar} \sqrt {\frac {\pi} {\lambda k T}} \exp \left[ \frac {- ( \Delta E + \lambda )^{2}} {4 \lambda k T} \right] \label{14.81}\]where \(\lambda = D \hbar \omega _ {0}\). Note that the activation barrier \(\Delta E^{\dagger}\) for displaced harmonic oscillators is \(\Delta E^{\dagger} = \Delta E + \lambda\). For a thermally averaged rate it is proper to associate the average energy gap with the standard free energy of reaction,\[\left\langle H _ {A} - H _ {D} \right\rangle - \lambda = \Delta G^{0}.\]Therefore, this expression is equivalent to the classical Marcus’ result for the electron transfer rate\[k _ {E T} = A \exp \left[ \frac {- \left( \Delta G^{o} + \lambda \right)^{2}} {4 \lambda k T} \right] \label{14.82}\]where the pre-exponential is\[A = 2 \pi | J |^{2} / \hbar \sqrt {4 \pi \lambda k T} \label{14.83}\]This expression shows the nonlinear behavior expected for the dependence of the electron transfer rate on the driving force for the forward transfer, i.e., the reaction free energy. This is unusual because we generally think in terms of a linear free energy relationship between the rate of a reaction and the equilibrium constant:\[\ln k \propto \ln K _ {e q}.\]This leads to the thinking that the rate should increase as we increase the driving free energy for the reaction \(-\Delta G^{0}\). This behavior only hold for a small region in \(\Delta G^{0}\). Instead, eq. shows that the ET rate will increase with \(-\Delta G^{0}\), until a maximum rate is observed for \(-\Delta G^{0}=\lambda\) and the rate then decreases. This decrease of k with increased \(-\Delta G^{0}\) is known as the “inverted regime”. The inverted behavior means that extra vibrational excitation is needed to reach the curve crossing as the acceptor well is lowered. The high temperature behavior for coupling to a low frequency mode \(\left(100 \mathrm{~cm}^{-1} \text {at } 300 \mathrm{~K}\right)\) is shown at right, in addition to a cartoon that indicates the shift of the curve crossing at \(\Delta G^{0}\) in increased. Particularly in intramolecular ET, it is common that one wants to separately account for the influence of a high frequency intramolecular vibration (inner sphere ET) that is not in the classical limit that applies to the low frequency classical solvent response. If an additional mode of frequency \(\omega _ {0}\) and a rate in the form of Equation \ref{14.81} is added to the low frequency mode, Jortner has given an expression for the rate as:\[w _ {E T} = \frac {| J |^{2}} {\hbar} \sqrt {\frac {\pi} {\lambda _ {0} k T}} \sum _ {j = 0}^{\infty} \left( \frac {e^{- D}} {j !} D^{j} \right) \exp \left[ \frac {- \left( \Delta G^{o} + \lambda _ {0} + j \hbar \omega _ {0} \right)^{2}} {4 \lambda _ {0} k T} \right] \label{14.84}\]Here \(\lambda _ {0}\) is the solvation reorganization energy. 
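A brief numerical sketch (added here, not in the original) of Jortner's expression, Equation \ref{14.84}, illustrating the slower falloff of the rate in the inverted regime; all parameter values are assumptions.

```python
import numpy as np
from math import factorial

hbar_w0 = 0.2   # quantum of the high-frequency mode, eV (assumed)
lam0 = 0.5      # solvent reorganization energy, eV (assumed)
D = 1.0         # dimensionless displacement^2 of the high-frequency mode
kT = 0.025      # eV at ~300 K
pref = 1.0      # |J|^2/hbar * sqrt(pi/(lam0*kT)), absorbed into one constant

def w_ET(dG0, jmax=40):
    # Equation 14.84: Franck-Condon weighted sum over vibronic channels j
    fc = np.exp(-D) * np.array([D**n / factorial(n) for n in range(jmax)])
    j = np.arange(jmax)
    return pref * np.sum(fc * np.exp(-(dG0 + lam0 + j * hbar_w0)**2
                                     / (4 * lam0 * kT)))

for dG0 in [-0.1, -0.5, -1.0, -1.5, -2.0]:
    print(f"dG0 = {dG0:+.1f} eV   w_ET = {w_ET(dG0):.3e} (arb. units)")
# For -dG0 > lam0 the decrease is much gentler than the Gaussian of
# Equation 14.82: the high-frequency mode opens vibronic channels
# (tunneling through a narrower barrier in the inverted regime).
```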
For this case the same inverted regime exists, although the simple Gaussian dependence of \(k\) on \(\Delta G^{0}\) no longer holds. The asymmetry here exists because tunneling sees a narrower barrier in the inverted regime than in the normal regime. Examples of the rates obtained with Equation \ref{14.84} are plotted in the figure below (T = 300 K).As with electronic spectroscopy, a more general and effective way of accounting for the nuclear motions that mediate the electron transfer process is to describe the coupling weighted density of states as a spectral density. Then we can use coupling to a harmonic bath to describe solvent and/or vibrational contributions of arbitrary form to the transfer event using\[g (t) = \int _ {0}^{\infty} d \omega\, \rho ( \omega ) \left[ \operatorname {coth} \left( \frac {\beta \hbar \omega} {2} \right) ( 1 - \cos \omega t ) + i ( \sin \omega t - \omega t ) \right] \label{14.85}\]This page titled 15.5: Marcus Theory for Electron Transfer is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,293
16.1: Vibrational Relaxation
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/16%3A_Quantum_Relaxation_Processes/16.01%3A_Vibrational_Relaxation | Here we want to address how excess vibrational energy undergoes irreversible energy relaxation as a result of interactions with other intra- and intermolecular degrees of freedom. Why is this process important? It is the fundamental process by which nonequilibrium states thermalize. As chemists, this plays a particularly important role in chemical reactions, where efficient vibrational relaxation of an activated species is important to stabilizing the product and not allowing it to re-cross to the reactant well. Further, the rare activation event for chemical reactions is similar to the reverse of this process. Although we will be looking specifically at vibrational couplings and relaxation, the principles are the same for electronic population relaxation through electron–phonon coupling and spin–lattice relaxation.For an isolated molecule with few vibrational coordinates, an excited vibrational state must relax by interacting with the remaining internal vibrations or the rotational and translational degrees of freedom. If a lot of energy must be dissipated, radiative relaxation may be more likely. In the condensed phase, relaxation is usually mediated by the interactions with the environment, for instance, the solvent or lattice. The solvent or lattice forms a continuum of intermolecular motions that can absorb the energy of the vibrational relaxation. Quantum mechanically this means that vibrational relaxation (the annihilation of a vibrational quantum) leads to excitation of solvent or lattice motion (creation of an intermolecular vibration that increases the occupation of higher lying states).For polyatomic molecules it is common to think of energy relaxation from high lying vibrational states (\(k T \ll \hbar \omega _ {0}\)) in terms of cascaded redistribution of energy through coupled modes of the molecule and its surroundings leading finally to thermal equilibrium. We seek ways of describing these highly non-equilibrium relaxation processes in quantum systems.Classically vibrational relaxation reflects the surroundings exerting a friction on the vibrational coordinate, which damps its amplitude and heats the sample. We have seen that a Langevin equation for an oscillator experiencing a fluctuating force \(f(t)\) describes such a process:\[\ddot {Q} (t) + \gamma \dot {Q} (t) + \omega _ {0}^{2} Q (t) = f (t) / m \label{15.1}\]This equation assigns a phenomenological damping rate \(\gamma\) to the vibrational relaxation we wish to describe. However, we know that in the long time limit, the system must thermalize and the dissipation of energy is related to the fluctuations of the environment through the classical fluctuation-dissipation relationship. Specifically,\[\langle f (t) f ( 0 ) \rangle = 2 m \gamma k _ {B} T \delta (t) \label{15.2}\]More general classical descriptions relate the vibrational relaxation rates to the correlation function for the fluctuating forces acting on the excited coordinate.In these classical pictures, efficient relaxation requires a matching between the frequency of the excited oscillator and the spectrum of fluctuations of the environment.
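The following is a small simulation sketch (an addition, not from the text) of Equation \ref{15.1} with a random force obeying the fluctuation-dissipation relation of Equation \ref{15.2}; the simple Euler discretization and all parameter choices are assumptions.

```python
import numpy as np

# Illustrative parameters in reduced units (assumptions)
m, w0, gamma, kT = 1.0, 1.0, 0.1, 1.0
dt, nsteps = 0.01, 200000
rng = np.random.default_rng(0)

# White noise with <f(t)f(0)> = 2 m gamma kT delta(t)  (Equation 15.2):
# a discrete delta carries weight 1/dt, so the per-step variance is
# 2*m*gamma*kT/dt.
sigma_f = np.sqrt(2 * m * gamma * kT / dt)

Q, V = 3.0, 0.0   # start with excess vibrational energy
energy = np.empty(nsteps)
for i in range(nsteps):
    f = sigma_f * rng.standard_normal()
    a = (f - m * w0**2 * Q - m * gamma * V) / m   # Equation 15.1
    V += a * dt
    Q += V * dt
    energy[i] = 0.5 * m * V**2 + 0.5 * m * w0**2 * Q**2

# The friction removes the initial excess energy on a time scale 1/gamma;
# the long-time average approaches kT, as equipartition requires.
print("initial E:", energy[0], " late-time <E>:", energy[nsteps//2:].mean())
```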
Since these fluctuations are dominated by motions of the energy scale of \(k_BT\), such models do not work effectively for high frequency vibrations whose frequency \(\omega \gg k_BT/\hbar\). We would like to develop a quantum model that allows for these processes and understand the correspondence between these classical pictures and quantum relaxation.Let’s treat the problem of a vibrational system \(H_S\) that relaxes through weak coupling \(V\) to a continuum of bath states \(H_B\) using perturbation theory. The eigenstates of \(H_S\) are \(| a \rangle\) and those of \(H_B\) are \(| \alpha \rangle\). Although our earlier perturbative treatment did not satisfy energy conservation, here we can take care of it by explicitly treating the bath states.\[\begin{align} H &= H _ {0} + V \label{15.3} \\[4pt] H _ {0} &= H _ {S} + H _ {B} \label{15.4} \end{align}\]with\[\begin{align} H _ {S} &= | a \rangle E _ {a} \langle a | + | b \rangle E _ {b} \langle b | \label{15.5} \\[4pt] H _ {B} &= \sum _ {\alpha} | \alpha \rangle E _ {\alpha} \langle \alpha | \label{15.6} \\[4pt] H _ {0} | a \alpha \rangle &= \left( E _ {a} + E _ {\alpha} \right) | a \alpha \rangle \label{15.7} \end{align}\]We will describe transitions from an initial state \(| i \rangle = | a \alpha \rangle\) with energy \(E _ {a} + E _ {\alpha}\) to a final state \(| f \rangle = | b \beta \rangle\) with energy \(E _ {b} + E _ {\beta}\). Since we expect energy conservation to hold, this requires that a change in the system state be accompanied by an equal and opposite change of energy in the bath.Initially, we take \(p_a=1\) and \(p_b=0\). If the interaction potential is \(V\), Fermi’s Golden Rule says the transition from \(| i \rangle\) to \(| f \rangle\) is given by\[\begin{align} k _ {f i} &= \frac {2 \pi} {\hbar} \sum _ {i , f} p _ {i} | \langle i | V | f \rangle |^{2} \delta \left( E _ {f} - E _ {i} \right) \label{15.8} \\[4pt] &= \frac {2 \pi} {\hbar} \sum _ {a , \alpha , b , \beta} p _ {a , \alpha} | \langle a \alpha | V | b \beta \rangle |^{2} \delta \left( \left( E _ {b} + E _ {\beta} \right) - \left( E _ {a} + E _ {\alpha} \right) \right) \\[4pt] &= \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \sum _ {a , \alpha \atop b , \beta} p _ {a , \alpha} \langle a \alpha | V | b \beta \rangle \langle b \beta | V | a \alpha \rangle e^{- i \left( \left( E _ {b} - E _ {a} \right) + \left( E _ {\beta} - E _ {\alpha} \right) \right) t / \hbar} \label{15.10} \end{align}\]Equation \ref{15.10} is just a restatement of the time domain version of Equation \ref{15.8}\[k _ {f i} = \frac {1} {\hbar^{2}} \int _ {- \infty}^{+ \infty} d t \langle V (t) V ( 0 ) \rangle \label{15.11}\]with\[V (t) = e^{i H _ {0} t / \hbar} V e^{- i H _ {0} t / \hbar} \label{15.12}\]Now, the matrix element involves evaluation in both the system and bath states, but if we write this in terms of a matrix element in the system coordinate \(V _ {a b} = \langle a | V | b \rangle\):\[\langle a \alpha | V | b \beta \rangle = \left\langle \alpha \left| V _ {a b} \right| \beta \right\rangle \label{15.13}\]Then we can write the rate as\[\begin{align} k_{b a} &=\frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} d t \sum_{\alpha, \beta} p_{\alpha}\left\langle\alpha\left|e^{+i E_{\alpha} t / \hbar} V_{a b} e^{-i E_{\beta} t / \hbar}\right| \beta\right\rangle\left\langle\beta\left|V_{b a}\right| \alpha\right\rangle e^{-i \omega_{b a} t} \label{15.14} \\[4pt] &=\frac{1}{\hbar^{2}} \int_{-\infty}^{+\infty} d t\left\langle V_{a b}(t) V_{b a} ( 0 )\right\rangle_{B} e^{-i \omega_{b a} t} \label{15.15}
\end{align}\]\[V _ {a b} (t) = e^{i H _ {B} t / \hbar} V _ {a b} e^{- i H _ {B} t / \hbar} \label{15.16}\]Equation \ref{15.15} says that the relaxation rate is determined by a correlation function\[C _ {b a} (t) = \left\langle V _ {a b} (t) V _ {b a} ( 0 ) \right\rangle \label{15.17}\]which describes the time-dependent changes to the coupling between \(b\) and \(a\). The time dependence of the interaction arises from the interaction with the bath; hence its time evolution under \(H_B\). The subscript \(\langle \cdots \rangle _ {B}\) means an equilibrium thermal average over the bath states\[\langle \cdots \rangle _ {B} = \sum _ {\alpha} p _ {\alpha} \langle \alpha | \cdots | \alpha \rangle \label{15.18}\]Note also that Equation \ref{15.15} is a Fourier transform evaluated at a single frequency. This expression says that the relaxation rate is given by the Fourier transform of the correlation function for the fluctuating coupling evaluated at the energy gap between the initial and final states.Alternatively we could think of the rate in terms of a vibrational coupling spectral density, and the rate is given by its magnitude at the system energy gap \(\omega _ {b a}\).\[k _ {b a} = \frac {1} {\hbar^{2}} \tilde {C} _ {b a} \left( \omega _ {a b} \right) \label{15.19}\]where the spectral representation \(\tilde {C} _ {b a} \left( \omega _ {a b} \right)\) is defined as the Fourier transform of \(C _ {b a} (t)\).To evaluate these expressions, let’s begin by considering the specific case of a system vibration coupled to a harmonic bath, which we will describe by a spectral density. Imagine that we prepare the system in the excited vibrational state \(v=|1\rangle\) and we want to describe relaxation to \(v=|0\rangle\).\[H _ {S} = \hbar \omega _ {0} \left( P^{2} + Q^{2} \right) \label{15.20}\]\[H _ {B} = \sum _ {\alpha} \hbar \omega _ {\alpha} \left( p _ {\alpha}^{2} + q _ {\alpha}^{2} \right) = \sum _ {\alpha} \hbar \omega _ {\alpha} \left( a _ {\alpha}^{\dagger} a _ {\alpha} + \frac {1} {2} \right) \label{15.21}\]We will take the system–bath interaction to be linear in the bath coordinates:\[V = H _ {S B} = \sum _ {\alpha} c _ {\alpha} Q q _ {\alpha} \label{15.22}\]Here \(c _ {\alpha}\) is a coupling constant which describes the strength of the interaction between the system and bath mode \(\alpha\). Note that this form suggests that the system vibration is a local mode interacting with a set of normal vibrations of the bath.For the case of single quantum relaxation from \(| a \rangle = | 1 \rangle\) to \(| b \rangle = | 0 \rangle\), we can write the coupling matrix element as\[V _ {b a} = \sum _ {\alpha} \xi _ {a b , \alpha} \left( a _ {\alpha}^{\dagger} + a _ {\alpha} \right) \label{15.23}\]where\[\xi _ {a b , \alpha} = c _ {\alpha} \frac {\sqrt {m _ {Q} m _ {q} \omega _ {0} \omega _ {\alpha}}} {2 \hbar} \langle b | Q | a \rangle \label{15.24}\]Note that we are using an equilibrium property, the coupling correlation function, to describe a nonequilibrium process, the relaxation of an excited state. Underlying the validity of the expressions are the principles of linear response. In practice this also implies a time scale separation between the equilibration of the bath and the relaxation of the system state. The bath correlation function should work fine if it has rapidly equilibrated, even though the system may not have.
An instance where this would work well is electronic spectroscopy, where relaxation and thermalization in the excited state occurs on picosecond time scales, whereas the electronic population relaxation is on nanosecond time scales.Here the matrix element \(\langle b | Q | a \rangle\) is used in evaluating \(\xi _ {a b , \alpha}\). Evaluating Equation \ref{15.17} is now much the same as problems we’ve had previously:\[\begin{align} \left\langle V _ {a b} (t) V _ {b a} ( 0 ) \right\rangle _ {B} &= \left\langle e^{i H _ {B} t / \hbar} V _ {a b} e^{- i H _ {B} t / \hbar} V _ {b a} \right\rangle _ {B} \\[4pt] &= \sum _ {\alpha} \xi _ {a b , \alpha}^{2} \left[ \left( \overline {n} _ {\alpha} + 1 \right) e^{- i \omega _ {\alpha} t} + \overline {n} _ {\alpha} e^{+ i \omega _ {\alpha} t} \right] \label{15.25} \end{align}\]Here \(\overline {n} _ {\alpha} = \left( e^{\beta \hbar \omega _ {\alpha}} - 1 \right)^{- 1}\) is the thermally averaged occupation number of the bath mode at \(\omega_{\alpha}\). In evaluating this we take advantage of relationships we have used before\[\overline {n} _ {\alpha} = \left( e^{\beta \hbar \omega _ {\alpha}} - 1 \right)^{- 1} \label{15.26}\]\[\left. \begin{array} {l} {\left\langle a _ {\alpha} a _ {\alpha}^{\dagger} \right\rangle = \overline {n} _ {\alpha} + 1} \\ {\left\langle a _ {\alpha}^{\dagger} a _ {\alpha} \right\rangle = \overline {n} _ {\alpha}} \end{array} \right. \label{15.27}\]So, now by Fourier transforming (Equation \ref{15.25}) we have the rate as\[k _ {b a} = \frac {1} {\hbar^{2}} \sum _ {\alpha} \xi _ {a b , \alpha}^{2} \left[ \left( \overline {n} _ {\alpha} + 1 \right) \delta \left( \omega _ {b a} + \omega _ {\alpha} \right) + \overline {n} _ {\alpha} \delta \left( \omega _ {b a} - \omega _ {\alpha} \right) \right] \label{15.28}\]This expression describes two relaxation processes which depend on temperature. The first is allowed at \(T = 0\, K\) and obeys \(- \omega _ {b a} = \omega _ {\alpha}\). This implies that \( E _ {a} > E _ {b} \), and that a loss of energy in the system is balanced by an equal rise in energy of the bath. That is, \(| \beta \rangle = | \alpha + 1 \rangle\). The second term is only allowed for elevated temperatures. It describes relaxation of the system by transfer to a higher energy state \(E _ {b} > E _ {a}\), with a concerted decrease of the energy of the bath (\(| \beta \rangle = | \alpha - 1 \rangle\)). Naturally, this process vanishes if there is no thermal energy in the bath.There is an exact analogy between this problem and the interaction of matter with a quantum radiation field. The interaction potential is instead a quantum vector potential and the bath is the photon field of different electromagnetic modes. Equation \ref{15.28} has two terms that describe emission and absorption processes.
The leading term describes the possibility of spontaneous emission, where a material system can relax in the absence of light by emitting a photon at the same frequency.To more accurately model the relaxation due to a continuum of modes, we can replace the explicit sum over bath states with an integral over a density of bath states \(W\)\[k _ {b a} = \frac {1} {\hbar^{2}} \int d \omega _ {\alpha} W \left( \omega _ {\alpha} \right) \xi _ {b a}^{2} \left( \omega _ {\alpha} \right) \left[ \left( \overline {n} \left( \omega _ {\alpha} \right) + 1 \right) \delta \left( \omega _ {b a} + \omega _ {\alpha} \right) + \overline {n} \left( \omega _ {\alpha} \right) \delta \left( \omega _ {b a} - \omega _ {\alpha} \right) \right] \label{15.29}\]We can also define a spectral density, which is the vibrational coupling-weighted density of states:\[\rho \left( \omega _ {\alpha} \right) \equiv W \left( \omega _ {\alpha} \right) \xi _ {b a}^{2} \left( \omega _ {\alpha} \right) \label{15.30}\]Then the relaxation rate is:\[\left.\begin{aligned} k _ {b a} & = \frac {1} {\hbar^{2}} \int d \omega _ {\alpha} W \left( \omega _ {\alpha} \right) \xi _ {b a}^{2} \left( \omega _ {\alpha} \right) \left[ \left( \overline {n} \left( \omega _ {\alpha} \right) + 1 \right) \delta \left( \omega _ {b a} + \omega _ {\alpha} \right) + \overline {n} \left( \omega _ {\alpha} \right) \delta \left( \omega _ {b a} - \omega _ {\alpha} \right) \right] \\ & = \frac {1} {\hbar^{2}} \left[ \left( \overline {n} \left( \omega _ {b a} \right) + 1 \right) \rho _ {b a} \left( \omega _ {a b} \right) + \overline {n} \left( \omega _ {b a} \right) \rho _ {b a} \left( - \omega _ {a b} \right) \right] \end{aligned} \right. \label{15.31}\]We see that the Fourier transform of the fluctuating coupling correlation function is equivalent to the coupling-weighted density of states, which we evaluate at \(\omega _ {b a}\) or \(-\omega _ {b a}\) depending on whether we are looking at upward or downward transitions. Note that \(\overline {n}\) still refers to the occupation number for the bath, although it is evaluated at the energy splitting between the initial and final system states. Equation \ref{15.31} is a full quantum expression, and obeys detailed balance between the upward and downward rates of transition between two states:\[k _ {b a} = \exp \left( - \beta \hbar \omega _ {a b} \right) k _ {a b} \label{15.32}\]From our description of the two level system in a harmonic bath, we see that high frequency relaxation (\(k T < < \hbar \omega _ {0}\)) only proceeds with energy from the system going into a mode of the bath at the same frequency, but at lower frequencies (\(k T \approx \hbar \omega _ {0}\)) that energy can flow both into the bath and from the bath back into the system. When the vibration has energies that are thermally populated in the bath, we return to the classical picture of a vibration in a fluctuating environment that can dissipate energy from the vibration as well as giving kicks that increase the energy of the vibration. Note that in a cascaded relaxation scheme, as one approaches \(kT\), the fraction of transitions that increase the system energy increases.
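As a quick numerical check (added here, not in the original), the upward and downward rates in Equation \ref{15.31} can be verified to obey the detailed balance condition of Equation \ref{15.32} for any bath occupation; \(\hbar = 1\) and the sampled gaps are arbitrary assumptions.

```python
import numpy as np

beta = 2.0   # 1/kT (hbar = 1; assumed)

for w_gap in [0.5, 1.0, 3.0]:          # system energy gaps (arbitrary)
    n = 1.0 / np.expm1(beta * w_gap)   # thermal occupation of the bath mode
    k_down = n + 1.0                   # proportional to the downward rate
    k_up = n                           # proportional to the upward rate
    # Detailed balance: the ratio of up to down rates is the Boltzmann factor
    print(w_gap, k_up / k_down, np.exp(-beta * w_gap))
```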
Note also that the bilinear coupling in Equation \ref{15.22}, which was used in our treatment of quantum fluctuations, can be associated with fluctuations of the bath that induce changes in energy (relaxation) and shifts of frequency (dephasing).Vibrational relaxation of polyatomic molecules in solids or in solution involves anharmonic coupling of energy between internal vibrations of the molecule, also called IVR (internal vibrational energy redistribution). Mechanical interactions between multiple modes of vibration of the molecule act to rapidly scramble energy deposited into one vibrational coordinate and lead to cascaded energy flow toward equilibrium.For this problem the bilinear coupling above doesn’t capture the proper relaxation process. Instead we can express the molecular potential energy in terms of well-defined normal modes of vibration for the system and the bath, and these interact weakly through small anharmonic terms in the potential. Then we can extend the perturbative approach above to include the effect of multiple accepting vibrations of the system or bath. For a set of system and bath coordinates, the potential energy for the system and system–bath interaction can be expanded as\[V _ {S} + V _ {S B} = \frac {1} {2} \sum _ {a} \frac {\partial^{2} V} {\partial Q _ {a}^{2}} Q _ {a}^{2} + \frac {1} {6} \sum _ {a , \alpha , \beta} \frac {\partial^{3} V} {\partial Q _ {a} \partial q _ {\alpha} \partial q _ {\beta}} Q _ {a} q _ {\alpha} q _ {\beta} + \frac {1} {6} \sum _ {a , b , \alpha} \frac {\partial^{3} V} {\partial Q _ {a} \partial Q _ {b} \partial q _ {\alpha}} Q _ {a} Q _ {b} q _ {\alpha} \cdots \label{15.33}\]Focusing explicitly on the first cubic expansion term, for one system oscillator:\[V _ {S} + V _ {S B} = \frac {1} {2} m \Omega^{2} Q^{2} + V^{( 3 )} Q q _ {\alpha} q _ {\beta} \label{15.34}\]Here, the system–bath interaction potential describes the case for a cubic anharmonic coupling that involves one vibration of the system \(Q\) interacting weakly with two vibrations of the bath \(q _ {\alpha}\) and \(q _ {\beta}\), so that \(\hbar \Omega \gg V^{( 3 )}\). Energy deposited in the system vibration will dissipate to the two vibrations of the bath, a three quantum process. Higher-order expansion terms would describe interactions involving four or more quanta.Working specifically with the cubic example, we can use the harmonic bath model to calculate the rate of energy relaxation. This picture is applicable if a vibrational mode of frequency \(\Omega\) relaxes by transferring its energy to another vibration nearby in energy (\(\omega _ {\alpha}\)), with the energy difference \(\omega _ {\beta}\) being accounted for by a continuum of intermolecular motions. For this case one can show\[k _ {b a} = \frac {1} {\hbar^{2}} \left[ \left( \overline {n} \left( \omega _ {\alpha} \right) + 1 \right) \left( \overline {n} \left( \omega _ {\beta} \right) + 1 \right) \rho _ {b a} \left( \omega _ {a b} \right) + \left( \overline {n} \left( \omega _ {\alpha} \right) + 1 \right) \overline {n} \left( \omega _ {\beta} \right) \rho _ {b a} \left( \omega _ {a b} \right) \right] \label{15.35}\]where \(\rho ( \omega ) \equiv W ( \omega ) \left( V^{( 3 )} ( \omega ) \right)^{2}\). Here we have taken \(\Omega , \omega _ {\alpha} \gg \omega _ {\beta}\). These two terms describe two possible relaxation pathways, the first in which annihilation of a quantum of \(\Omega\) leads to a creation of one quantum each of \(\omega _ {\alpha}\) and \(\omega _ {\beta}\).
The second term describes the dissipation of energy by coupling to a higher energy vibration, with the excess energy being absorbed from the bath. Annihilation of a quantum of \(\Omega\) leads to a creation of one quantum of \(\omega_{\alpha}\) and the annihilation of one quantum of \(\omega_{\beta}\). Naturally this latter term is only allowed when there is adequate thermal energy present in the bath.In general, we would like a practical way to calculate relaxation rates, and calculating quantum correlation functions is not practical. How do we use classical calculations for the bath, for instance drawing on a classical molecular dynamics simulation? Is there a way to get a quantum mechanical rate?The first problem is that the quantum correlation function is complex, \(C _ {a b}^{*} (t) = C _ {a b} ( - t )\), and the classical correlation function is real and even, \(C _ {C l} (t) = C _ {C l} ( - t )\). In order to connect these two correlation functions, one can derive a quantum correction factor that allows one to predict the quantum correlation function on the basis of the classical one. This is based on the assumption that at high temperature it should be possible to substitute the classical correlation function for the real part of the quantum correlation function\[C _ {C l} (t) \Rightarrow C _ {b a}^{\prime} (t) \label{15.36}\]To make this adjustment we start with the frequency domain expression derived from the detailed balance expression \(\tilde {C} ( - \omega ) = e^{- \beta \hbar \omega} \tilde {C} ( \omega )\)\[\tilde {C} ( \omega ) = \frac {2} {1 + \exp ( - \beta \hbar \omega )} \tilde {C}^{\prime} ( \omega ) \label{15.37}\]Here \(\tilde {C}^{\prime} ( \omega )\) is defined as the Fourier transform of the real part of the quantum correlation function. So the vibrational relaxation rate is\[k _ {b a} = \frac {4} {\hbar^{2} \left( 1 + \exp \left( - \hbar \omega _ {b a} / k T \right) \right)} \int _ {0}^{\infty} d t e^{- i \omega _ {b a} t} \operatorname {Re} \left[ \left\langle V _ {a b} (t) V _ {b a} ( 0 ) \right\rangle \right] \label{15.38}\]Now we assume that the classical correlation function can be substituted here, as in Equation \ref{15.36}. The leading term out front can be considered a “quantum correction factor” that accounts for the detailed balance of rates encoded in the quantum spectral density.In practice such a calculation might be done with molecular dynamics simulations. Here one has an explicit characterization of the intermolecular forces that would act to damp the excited vibrational mode. One can calculate the system–bath interactions by expanding the vibrational potential of the system in the bath coordinates\[\left.\begin{aligned} V _ {S} + V _ {S B} & = V _ {0} + \sum _ {\alpha} \frac {\partial V^{\alpha}} {\partial Q} Q + \sum _ {\alpha} \frac {\partial^{2} V^{\alpha}} {\partial Q^{2}} Q^{2} + \cdots \\ & = V _ {0} + F Q + G Q^{2} + \cdots \end{aligned} \right. \label{15.39}\]Here \(V^{\alpha}\) represents the potential of an interaction of one solvent coordinate acting on the excited vibrational system coordinate \(Q\). The second term in this expansion \(FQ\) depends linearly on the system \(Q\) and bath \(\alpha\) coordinates, and we can use variation in this parameter to calculate the correlation function for the fluctuating interaction potential. Note that \(F\) is the force that the solvent molecules exert on \(Q\)!
Thus the relevant classical correlation function for vibrational relaxation is a force correlation function\[C _ {C l} (t) = \langle F (t) F ( 0 ) \rangle \label{15.40}\]\[k _ {C l} = \frac {1} {k T} \int _ {0}^{\infty} d t \cos \omega _ {b a} t \langle F (t) F ( 0 ) \rangle \label{15.41}\]This page titled 16.1: Vibrational Relaxation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,294 |
16.2: A Density Matrix Description of Quantum Relaxation
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/16%3A_Quantum_Relaxation_Processes/16.02%3A_A_Density_Matrix_Description_of_Quantum_Relaxation | Here we will more generally formulate a quantum mechanical picture of coherent and incoherent relaxation processes that occur as the result of interaction between a prepared system and its environment. This description will apply to the case where we separate the degrees of freedom in our problem into a system and a bath that interact. We have limited information about the bath degrees of freedom. As a statistical mixture, we only have knowledge of the probability of occupying states of the bath and not of the phase relationships required to describe a deterministic quantum system. For such problems, the density matrix is the natural tool.How does a system get into a mixed state? Generally, if you have two systems and you put these in contact with each other, interaction between the two will lead to a new system that is inseparable. Imagine that I have two systems \(H_S\) and \(H_B\) for which the eigenstates of \(H_S\) are \(| a \rangle\) and those of \(H_B\) are \(| \alpha \rangle\).\[H _ {0} = H _ {S} + H _ {B} \label{15.42}\]with\[\begin{align} H _ {S} | a \rangle &= E _ {a} | a \rangle \label{15.43} \\[4pt] H _ {B} | \alpha \rangle &= E _ {\alpha} | \alpha \rangle \end{align}\]In general, before these systems interact, they can be described in terms of product states in the eigenstates of \(H_S\) and \(H_B\):\[| \psi \left( t _ {0} \right) \rangle = | \psi _ {S}^{0} \rangle | \psi _ {B}^{0} \rangle \label{15.44}\]with\[ \begin{align} | \psi _ {S}^{0} \rangle &= \sum _ {a} s _ {a} | a \rangle \label{15.45A} \\[4pt] | \psi _ {B}^{0} \rangle &= \sum _ {\alpha} b _ {\alpha} | \alpha \rangle \end{align}\]\[| \psi _ {0} \rangle = \sum _ {a , \alpha} s _ {a} b _ {\alpha} | a \rangle | \alpha \rangle \label{15.46}\]After these states are allowed to interact, we have a new state vector \(| \psi (t) \rangle\). The new state can still be expressed in the zero-order basis, although this does not represent the eigenstates of the new Hamiltonian\[H = H _ {0} + V \label{15.47}\]\[| \psi (t) \rangle = \sum _ {a , \alpha} c _ {a \alpha} | a \alpha \rangle \label{15.48}\]For any point in time, \(c _ {a \alpha}\) is the joint probability amplitude for finding the particle of \(| \psi _ {S} \rangle\) in \(| a \rangle\) and simultaneously finding the particle of \(| \psi _ {B} \rangle\) in \(| \alpha \rangle\). At \(t=t_0\), \(c _ {a \alpha} = s _ {a} b _ {\alpha}\).Now suppose that you have an operator \(A\) that is only an operator in the \(| \psi _ {S} \rangle\) coordinates. This might represent an observable for the system that you wish to measure.
Let’s calculate the expectation value of \(A\)\[\langle A (t) \rangle = \langle \psi (t) | A | \psi (t) \rangle = \left\langle \psi _ {S} | A | \psi _ {S} \right\rangle \label{15.49}\]\[\begin{aligned} \langle A (t) \rangle & = \sum _ {a , \alpha , b , \beta} c _ {a \alpha}^{*} c _ {b \beta} \langle a \alpha | A | b \beta \rangle \\ & = \sum _ {a , \alpha , b , \beta} c _ {a \alpha}^{*} c _ {b \beta} \langle a | A | b \rangle \delta _ {\alpha \beta} \\ & = \sum _ {a , b} \left( \sum _ {\alpha} c _ {a \alpha}^{*} c _ {b \alpha} \right) A _ {a b} \\ & \equiv \sum _ {a , b} \left( \rho _ {S} \right) _ {b a} A _ {a b} \\ & = \operatorname {Tr} \left[ \rho _ {S} A \right] \end{aligned} \]Here we have defined a density matrix for the degrees of freedom in \(| \psi _ {S} \rangle\)\[\rho _ {S} = | \psi _ {S} \rangle \langle \psi _ {S} | \label{15.51}\]with density matrix elements that are traced over the \(| \psi _ {B} \rangle\) states, that is, that are averaged over the probability of occupying the \(| \psi _ {B} \rangle\) states:\[\left\langle b \left| \rho _ {S} \right| a \right\rangle = \sum _ {\alpha} c _ {a \alpha}^{*} c _ {b \alpha} \label{15.52}\]Here the matrix elements in direct product states involve elements of a four-dimensional matrix, which are specified by the tetradic notation.We have defined a trace of the density matrix over the unobserved degrees of freedom in \(| \psi _ {B} \rangle\), i.e. a sum over diagonal elements in \(\alpha\). To relate this to our similar prior expression: \(\langle A (t) \rangle = \operatorname {Tr} [ \rho A ]\), the following definitions are useful:\[\rho _ {S} = \operatorname {Tr} _ {B} ( \rho ) \label{15.53}\]\[\langle A (t) \rangle = \operatorname {Tr} _ {S} \left( \rho _ {S} A \right) = \sum _ {a , b} \left( \rho _ {S} \right) _ {b a} A _ {a b}\]Also,\[\operatorname {Tr} ( A \times B ) = \operatorname {Tr} ( A ) \operatorname {Tr} ( B ) \label{15.54}\]Since \(\rho _ {S}\) is Hermitian, it can be diagonalized by a unitary transformation \(T\), where the new eigenbasis \(| m \rangle\) represents the mixed states of the \(| \psi _ {S} \rangle\) system.\[\rho _ {S} = \sum _ {m} | m \rangle \langle m | \rho _ {m m} \label{15.55}\]\[\sum _ {m} \rho _ {m m} = 1 \label{15.56}\]The density matrix elements represent the probability of occupying state \(m\) averaged over the bath degrees of freedom\[\begin{aligned} \rho _ {m m} & = \sum _ {a , b} T _ {m b} \rho _ {b a} T _ {a m}^{\dagger} \\ & = \sum _ {a , b , \alpha} c _ {b \alpha} T _ {m b} c _ {a \alpha}^{*} T _ {m a}^{*} \\ & = \sum _ {\alpha} f _ {m \alpha} f _ {m \alpha}^{*} \\ & = \sum _ {\alpha} \left| f _ {m \alpha} \right|^{2} = p _ {m} \geq 0 \end{aligned} \label{15.57}\]The quantum mechanical interaction of one system with another causes the system to be in a mixed state after the interaction. The mixed states are generally not separable into the original states. The mixed state is described by\[| \psi _ {S} \rangle = \sum _ {m} d _ {m} | m \rangle \label{15.58}\]\[d _ {m} = \sum _ {\alpha} f _ {m \alpha} \label{15.59}\]If we only observe a few degrees of freedom, we can calculate observables by tracing over unobserved degrees of freedom. This forms the basis for treating relaxation phenomena.
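To make the reduced density matrix concrete, here is a small sketch (an addition to the text) that builds an entangled system–bath state, traces over the bath, and shows that the system is left in a mixed state; the two-level "bath" is a minimal stand-in chosen for illustration.

```python
import numpy as np

# Joint state |psi> = sum_{a,alpha} c_{a,alpha} |a>|alpha> for a two-level
# system maximally entangled with a two-level "bath"
c = np.array([[1.0, 0.0],
              [0.0, 1.0]]) / np.sqrt(2)   # c[a, alpha]

# Full pure-state density matrix rho[a, alpha, b, beta] = c c*
rho = np.einsum('ai,bj->aibj', c, c.conj())

# Partial trace over the bath index (cf. Equation 15.52):
# (rho_S)_{ab} = sum_alpha c_{a,alpha} c*_{b,alpha}
rho_S = np.einsum('akbk->ab', rho)

print("rho_S =\n", rho_S.real)
print("Tr rho_S   =", np.trace(rho_S).real)          # = 1
print("Tr rho_S^2 =", np.trace(rho_S @ rho_S).real)  # = 0.5 < 1: mixed state
```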
We observe a few degrees of freedom, coupled to many others, and this coupling leads to irreversible relaxation.So now to describe irreversible processes in quantum systems, let’s look at the case where we have partitioned the problem so that we have a few degrees of freedom that we are most interested in (the system), which is governed by \(H_S\) and which we observe with a system operator \(A\). The remaining degrees of freedom are a bath, which interact with the system. The Hamiltonian is given by Equations \ref{15.42} and \ref{15.47}. In our observations, we will be interested in expectation values of \(A\), which we have seen are written\[\begin{aligned} \left\langle A _ {S} \right\rangle & = \operatorname {Tr} [ \rho (t) A ] \\[4pt] & = T r _ {S} T r _ {B} [ \rho (t) A ] \\[4pt] & = \operatorname {Tr} _ {S} [ \sigma (t) A ] \\[4pt] & = \sum _ {a , b} \sigma _ {a b} (t) A _ {b a} \end{aligned} \label{15.60}\]Here \(\sigma\) is the reduced density operator for the system degrees of freedom. This is the variable more commonly used for \(\rho _ {S}\).\[\sigma_{ab} = \sum_{\alpha} \langle a \alpha | \rho | b \alpha \rangle = \left( \operatorname {Tr} _ {B}\, \rho \right) _ {a b} \label{15.61}\]\(T r _ {B}\) and \(T r _ {S}\) are partial traces over the bath and system respectively. Note that since\[\operatorname {Tr} ( A \times B ) = \operatorname {Tr} A\, T r B\]for direct product states, all we need to do is describe the time evolution of \(\sigma\) to understand the time dependence of \(A\).We obtain the equation of motion for the reduced density matrix beginning with\[\rho (t) = U (t) \rho ( 0 ) U^{\dagger} (t) \label{15.62}\]and tracing over bath:\[\sigma (t) = T r _ {B} \left[ U \rho U^{\dagger} \right] \label{15.63}\]We can treat the time evolution of the reduced density matrix in the interaction picture.
From our earlier discussion of the density matrix, we integrate the equation of motion\[\dot {\rho} _ {I} = - \frac {i} {\hbar} \left[ V _ {I} (t) , \rho _ {I} (t) \right] \label{15.64}\]to obtain\[\rho _ {I} (t) = \rho _ {I} ( 0 ) - \frac {i} {\hbar} \int _ {0}^{t} d \tau \left[ V _ {I} ( \tau ) , \rho _ {I} ( \tau ) \right] \label{15.65}\]Remember that the density matrix in the interaction picture is\[\rho _ {I} (t) = U _ {0}^{\dagger} \rho (t) U _ {0} = e^{i \left( H _ {S} + H _ {B} \right) t / \hbar} \rho (t) e^{- i \left( H _ {S} + H _ {B} \right) t / \hbar} \label{15.66}\]and similarly\[V _ {I} (t) = U _ {0}^{\dagger} V U _ {0} = e^{i \left( H _ {S} + H _ {B} \right) t / \hbar} V e^{- i \left( H _ {S} + H _ {B} \right) t / \hbar} \label{15.67}\]Substituting Equation \ref{15.65} into Equation \ref{15.64} we have\[\dot {\rho} _ {I} (t) = - \frac {i} {\hbar} \left[ V _ {I} (t) , \rho _ {I} \left( t _ {0} \right) \right] - \frac {1} {\hbar^{2}} \int _ {0}^{t} d t^{\prime} \left[ V _ {I} (t) , \left[ V _ {I} \left( t^{\prime} \right) , \rho _ {I} \left( t^{\prime} \right) \right] \right] \label{15.68}\]Now taking a trace over the bath states\[\dot {\sigma} _ {I} (t) = - \frac {i} {\hbar} T r _ {B} \left[ V _ {I} (t) , \rho _ {I} \left( t _ {0} \right) \right] - \frac {1} {\hbar^{2}} \int _ {0}^{t} d t^{\prime} T r _ {B} \left[ V _ {I} (t) , \left[ V _ {I} \left( t^{\prime} \right) , \rho _ {I} \left( t^{\prime} \right) \right] \right] \label{15.69}\]If we assume that the interaction of the system and bath is small enough that the system cannot change the bath\[\rho _ {I} (t) \approx \sigma _ {I} (t) \rho _ {B} ( 0 ) = \sigma _ {I} (t) \rho _ {e q}^{B} \label{15.70}\]\[\rho _ {e q}^{B} = \frac {e^{- \beta H _ {B}}} {Z} \label{15.71}\]Then we obtain an equation of motion for \(\sigma\) to second order:\[\dot {\sigma} _ {I} (t) = - \frac {i} {\hbar} T r _ {B} \left[ V _ {I} (t) , \sigma _ {I} ( 0 ) \rho _ {e q}^{B} \right] - \frac {1} {\hbar^{2}} \int _ {0}^{t} d t^{\prime} T r _ {B} \left[ V _ {I} (t) , \left[ V _ {I} \left( t^{\prime} \right) , \sigma _ {I} \left( t^{\prime} \right) \rho _ {e q}^{B} \right] \right] \label{15.72}\]The last term involves an integral over a correlation function for a fluctuating interaction potential. This looks similar to a linear response function, and has the same form as the relaxation rates from Fermi’s Golden Rule that we just discussed. The first term in Equation \ref{15.72} involves a thermal average over the interaction potential,\[\langle V \rangle _ {B} = T r _ {B} \left[ V \rho _ {e q}^{B} \right].\]If this average value is zero, which would be the case for an off-diagonal form of \(V\), we can drop the first term in the equation of motion for \(\sigma_I\). If it were not zero, it is possible to redefine the Hamiltonian such that \(H_{0} \rightarrow H_{0}+\langle V\rangle_{B} \text { and } V(t) \rightarrow V(t)-\langle V\rangle_{B}\), which recasts it in a form where \(\langle V \rangle _ {B} \rightarrow 0\) and the first term can be neglected. Now let’s evaluate the equation of motion for the case where the system–bath interaction can be written as a product of operators in the system \(\hat{A}\) and bath \(\hat {\beta}\)\[H _ {S B} = V = \hat {A} \hat {\beta}\label{15.73}\]This is equivalent to the bilinear coupling form that was used in our prior description of dephasing and population relaxation. There we took the interaction to be linearly proportional to the system and bath coordinate(s): \(V = c Q q\).
The time evolution in the two variables is separable and given by\[ \begin{array} {l} {\hat {A} (t) = U _ {S}^{\dagger} \hat {A} \left( t _ {0} \right) U _ {S}} \\[4pt] {\hat {\beta} (t) = U _ {B}^{\dagger} \hat {\beta} \left( t _ {0} \right) U _ {B}} \end{array} \label{15.74}\]The equation of motion for \(\sigma _ {I}\) becomes\[ \dot {\sigma} _ {I} (t) = - \frac {1} {\hbar^{2}} \int _ {0}^{t} d t^{\prime} \left\{ \left[ \hat {A} (t) \hat {A} \left( t^{\prime} \right) \sigma \left( t^{\prime} \right) - \hat {A} \left( t^{\prime} \right) \sigma \left( t^{\prime} \right) \hat {A} (t) \right] \operatorname {Tr} _ {B} \left( \hat {\beta} (t) \hat {\beta} \left( t^{\prime} \right) \rho _ {e q}^{B} \right) - \left[ \hat {A} (t) \sigma \left( t^{\prime} \right) \hat {A} \left( t^{\prime} \right) - \sigma \left( t^{\prime} \right) \hat {A} \left( t^{\prime} \right) \hat {A} (t) \right] \operatorname {Tr} _ {B} \left( \hat {\beta} \left( t^{\prime} \right) \hat {\beta} (t) \rho _ {e q}^{B} \right) \right\} \label{15.75}\]Here the history of the evolution of \(\hat{A}\) depends on the time dependence of the bath variables coupled to the system. The time dependence of the bath enters as a bath correlation function\[\begin{aligned} C _ {\beta \beta} \left( t - t^{\prime} \right) & = \operatorname {Tr} _ {B} \left( \hat {\beta} (t) \hat {\beta} \left( t^{\prime} \right) \rho _ {e q}^{B} \right) \\[4pt] & = \left\langle \hat {\beta} (t) \hat {\beta} \left( t^{\prime} \right) \right\rangle _ {B} = \left\langle \hat {\beta} \left( t - t^{\prime} \right) \hat {\beta} ( 0 ) \right\rangle _ {B} \end{aligned} \label{15.76}\]The bath correlation function can be evaluated using the methods that we have used in the Energy Gap Hamiltonian and Brownian Oscillator Models. Switching integration variables to the time interval prior to observation\[\tau = t - t^{\prime} \label{15.77}\]we obtain\[\dot {\sigma} _ {I} (t) = - \frac {1} {\hbar^{2}} \int _ {0}^{t} d \tau \left[ \hat {A} (t) , \hat {A} ( t - \tau ) \sigma _ {I} ( t - \tau ) \right] C _ {\beta \beta} ( \tau ) - \left[ \hat {A} (t) , \sigma _ {I} ( t - \tau ) \hat {A} ( t - \tau ) \right] C _ {\beta \beta}^{*} ( \tau ) \label{15.78}\]Here we have made use of \(C _ {\beta \beta}^{*} ( \tau ) = C _ {\beta \beta} ( - \tau )\). For the case that the system–bath interaction is a result of interactions with many bath coordinates\[V = \sum _ {\alpha} \hat {A} \hat {\beta} _ {\alpha} \label{15.79}\]then Equation \ref{15.78} becomes\[\dot {\sigma} _ {I} (t) = - \frac {1} {\hbar^{2}} \sum _ {\alpha , \beta} \int _ {0}^{t} d \tau \left[ \hat {A} (t) , \hat {A} ( t - \tau ) \sigma _ {I} ( t - \tau ) \right] C _ {\alpha \beta} ( \tau ) - \left[ \hat {A} (t) , \sigma _ {I} ( t - \tau ) \hat {A} ( t - \tau ) \right] C _ {\alpha \beta}^{*} ( \tau ) \label{15.80}\]with the bath correlation function\[C _ {\alpha \beta} ( \tau ) = \left\langle \hat {\beta} _ {\alpha} ( \tau ) \hat {\beta} _ {\beta} ( 0 ) \right\rangle _ {B} \label{15.81}\]Equation \ref{15.78} or \ref{15.80} indicates that the rates of exchange of amplitude between the system states carry memory of the bath’s influence on the system, that is, \(\sigma _ {I} (t)\) is dependent on \(\sigma _ {I} ( t - \tau )\).
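To see when the Markov approximation introduced next is justified, the following sketch (an addition to the text) evaluates a model bath correlation function with an assumed exponential decay and its one-sided transform; the functional form and parameters are illustrative assumptions.

```python
import numpy as np

Delta2, tau_c = 1.0, 0.2   # coupling variance and bath correlation time (assumed)
t = np.linspace(0.0, 20.0, 200001)
dt = t[1] - t[0]

C = Delta2 * np.exp(-t / tau_c)   # model bath correlation function C(t)

# One-sided (Fourier-Laplace) transform, cf. Equation 15.88 below,
# evaluated at a few candidate system frequencies
for w in [0.0, 1.0, 5.0]:
    C_tilde = np.sum(np.exp(1j * w * t) * C) * dt
    print(f"w = {w}:  C~(w) = {C_tilde:.4f}")
# Analytically C~(w) = Delta2 * tau_c / (1 - 1j*w*tau_c). When the bath
# decays quickly (small tau_c) compared to the system evolution, C(t) is
# effectively delta-correlated and the memory integral collapses: this is
# the Markov limit invoked in the next step.
```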
If we make the Markov approximation, for which the dynamics of the bath are much faster than the evolution of the system and where the system has no memory of its past, we would replace\[\sigma \left( t^{\prime} \right) = \sigma \left( t^{\prime} \right) \delta \left( t - t^{\prime} \right) \Rightarrow \sigma (t) \label{15.82}\]in Equation \ref{15.75}, or equivalently in Equation \ref{15.78} set\[\sigma _ {I} ( t - \tau ) \Rightarrow \sigma _ {I} (t) \label{15.83}\]For the subsequent work, we use this approximation. Similarly, the presence of a time scale separation between a slow system and a fast bath allows us to change the upper integration limit in Equation \ref{15.78} from \(t\) to \(\infty\). Evaluating the matrix elements in the system eigenstates (the steps are shown below) then gives\[ \dot {\sigma} _ {a b} (t) = - i \omega _ {a b} \sigma _ {a b} (t) - \frac {1} {\hbar^{2}} \sum _ {c , d} [ \hat {A} _ {a c} \hat {A} _ {c d} \sigma _ {d b} (t) \tilde {C} _ {\beta \beta} \left( \omega _ {d c} \right) - \hat {A} _ {a c} \hat {A} _ {d b} \sigma _ {c d} (t) \tilde {C} _ {\beta \beta} \left( \omega _ {c a} \right) - \hat {A} _ {a c} \hat {A} _ {d b} \sigma _ {c d} (t) \tilde {C} _ {\beta \beta}^{*} \left( - \omega _ {d b} \right) + \hat {A} _ {c d} \hat {A} _ {d b} \sigma _ {a c} (t) \tilde {C} _ {\beta \beta}^{*} \left( - \omega _ {c d} \right) ] \label{15.91}\]The rate constants are defined through:\[\Gamma _ {a b , c d}^{+} = \frac {1} {\hbar^{2}} A _ {a b} A _ {c d} \tilde {C} _ {\beta \beta} \left( \omega _ {d c} \right) \label{15.92}\]\[\Gamma _ {a b , c d}^{-} = \frac {1} {\hbar^{2}} A _ {a b} A _ {c d} \tilde {C} _ {\beta \beta} \left( \omega _ {b a} \right) \label{15.93}\]Here we made use of\[\tilde {C} _ {\beta \beta}^{*} ( \omega ) = \tilde {C} _ {\beta \beta} ( - \omega ).\]Also, it is helpful to note that\[\Gamma _ {a b , c d}^{+} = \left[ \Gamma _ {d c , b a}^{-} \right]^{*} \label{15.94}\]The coupled differential equations in Equation \ref{15.91} express the relaxation dynamics of the system states almost entirely in terms of the system Hamiltonian. The influence of the bath only enters through the bath correlation function.To describe the exchange of amplitude between system states induced by the bath, we will want to evaluate the matrix elements of the reduced density matrix in the system eigenstates.
To begin, we restate the equation of motion (Equation \ref{15.80}) with the Markov approximation and extended integration limit applied:\[\dot {\sigma} _ {I} (t) = - \frac {1} {\hbar^{2}} \int _ {0}^{\infty} d \tau \left[ \hat {A} (t) , \hat {A} ( t - \tau ) \sigma _ {I} (t) \right] C _ {\beta \beta} ( \tau ) - \left[ \hat {A} (t) , \sigma _ {I} (t) \hat {A} ( t - \tau ) \right] C _ {\beta \beta}^{*} ( \tau ) \label{15.84}\]Now, let’s convert the time dependence expressed in terms of the interaction picture into a Schrödinger representation using\[\left\langle a | \hat {A} (t) | b \right\rangle = e^{i \omega _ {a b} t} A _ {a b}\]\[\left\langle a \left| \sigma^{I} \right| b \right\rangle = e^{i \omega _ {a b} t} \sigma _ {a b}\]with\[\dot {\sigma} _ {a b}^{I} = \frac {\partial} {\partial t} \left\langle a \left| \sigma^{I} \right| b \right\rangle\]To see how this turns out, consider the first term in Equation \ref{15.84}:\[\dot {\sigma} _ {a b}^{I} (t) = - \sum _ {c , d} \frac {1} {\hbar^{2}} \int _ {0}^{\infty} d \tau \hat {A} _ {a c} (t) \hat {A} _ {c d} ( t - \tau ) \sigma _ {d b}^{I} (t) C _ {\beta \beta} ( \tau ) \label{15.86}\]\[\dot {\sigma} _ {a b} (t) e^{i \omega _ {a b} t} + i \omega _ {a b} e^{i \omega _ {a b} t} \sigma _ {a b} = - \sum _ {c , d} \frac {1} {\hbar^{2}} \hat {A} _ {a c} \hat {A} _ {c d} \sigma _ {d b} (t) e^{i \left( \omega _ {a c} + \omega _ {c d} + \omega _ {d b} \right) t} \int _ {0}^{\infty} d \tau e^{- i \omega _ {c d} \tau} C _ {\beta \beta} ( \tau ) \label{15.87}\]Since \(\omega _ {a c} + \omega _ {c d} + \omega _ {d b} = \omega _ {a b}\), the oscillating prefactors on the two sides cancel. Defining the Fourier-Laplace transform of the bath correlation function:\[\tilde {C} _ {\beta \beta} ( \omega ) = \int _ {0}^{\infty} d \tau e^{i \omega \tau} C _ {\beta \beta} ( \tau )\label{15.88}\]We have\[\dot {\sigma} _ {a b} (t) = - i \omega _ {a b} \sigma _ {a b} - \sum _ {c , d} \frac {1} {\hbar^{2}} \hat {A} _ {a c} \hat {A} _ {c d} \sigma _ {d b} (t) \tilde {C} _ {\beta \beta} \left( \omega _ {d c} \right) \label{15.89}\]Here the spectral representation of the bath correlation function is being evaluated at the energy gap between system states \(\omega _ {d c}\). So the evolution of coherences and populations in the system states is governed by their interactions with other system states through the matrix elements, modified by the fluctuations of the bath at the corresponding energy gaps. In this manner, Equation \ref{15.84} becomes Equation \ref{15.91} above.The common alternate way of writing these expressions is in terms of the relaxation superoperator \(\mathbf {R}\)\[\dot {\sigma} _ {a b} (t) = - i \omega _ {a b} \sigma _ {a b} - \sum _ {c , d} R _ {a b , c d} \sigma _ {c d} (t) \label{15.95}\]or in the interaction picture\[\dot {\sigma} _ {a b}^{I} (t) = - \sum _ {c , d} \sigma _ {c d}^{I} (t) R _ {a b , c d} e^{i \left( E _ {a} - E _ {b} - E _ {c} + E _ {d} \right) t / \hbar} \label{15.96}\]Equation \ref{15.95}, the reduced density matrix equation of motion for a Markovian bath, is known as the Redfield equation. It describes the irreversible and oscillatory components of the amplitude in the \(| a \rangle \langle b |\) coherence as a result of dissipation to the bath and feeding from other states. \(\mathbf {R}\) describes the rates of change of the diagonal and off-diagonal elements of \(\sigma _ {I}\) and is expressed as:\[R _ {a b , c d} = \delta _ {d b} \sum _ {k} \Gamma _ {a k , k c}^{+} - \Gamma _ {d b , a d}^{+} - \Gamma _ {d b , a d}^{-} + \delta _ {a c} \sum _ {k} \Gamma _ {d k , k b}^{-} \label{15.97}\]where \(k\) refers to a system state.
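The structure of Equations \ref{15.92}–\ref{15.97} can be made concrete with a small sketch (an addition, with \(\hbar = 1\)) that assembles secular population rates for a two-level system from a model bath spectral function; the operator \(\hat{A}\), the bath model, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Two-level system (hbar = 1); all parameter values are illustrative
E = np.array([0.0, 1.0])                  # eigenenergies of H_S
A = np.array([[0.0, 0.3], [0.3, 0.0]])    # system operator in V = A*beta (assumed)
kT = 0.5
Delta2, tau_c = 1.0, 0.1                  # model bath parameters (assumed)

def C_tilde(w):
    # Real part of a model quantum bath spectral function, constructed so
    # that C~(-w)/C~(w) = exp(-w/kT), i.e. detailed balance is built in
    lor = Delta2 * tau_c / (1.0 + (w * tau_c)**2)
    return lor * 2.0 / (1.0 + np.exp(-w / kT))

gap = E[1] - E[0]

# Population rates in the spirit of Equations 15.92 and 15.106 (hbar = 1):
# w ~ 2 |A_ab|^2 Re C~(omega), evaluated at +gap (downhill) or -gap (uphill)
k_down = 2 * A[0, 1] * A[1, 0] * C_tilde(+gap)   # 1 -> 0, energy into the bath
k_up   = 2 * A[0, 1] * A[1, 0] * C_tilde(-gap)   # 0 -> 1, thermally activated
print("k(1->0) =", k_down, "  k(0->1) =", k_up)
print("ratio   =", k_up / k_down, "  Boltzmann:", np.exp(-gap / kT))

# Secular decay of the 0-1 coherence: lifetime contribution only
# (pure dephasing would add terms in C~(0), cf. Equations 15.112-15.115)
print("1/T2 (lifetime part) =", 0.5 * (k_down + k_up))
```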
The derivation described above can be performed without assuming a form for the system–bath interaction potential as we did in Equation \ref{15.73}. If so, one can write the relaxation operator in terms of a correlation function for the system–bath interaction,\[\Gamma _ {a b , c d}^{+} = \frac {1} {\hbar^{2}} \int _ {0}^{\infty} d \tau \left\langle V _ {a b} ( \tau ) V _ {c d} ( 0 ) \right\rangle _ {B} e^{- i \omega _ {c d} \tau} \label{15.98}\]\[\Gamma _ {a b , c d}^{-} = \frac {1} {\hbar^{2}} \int _ {0}^{\infty} d \tau \left\langle V _ {a b} ( 0 ) V _ {c d} ( \tau ) \right\rangle _ {B} e^{- i \omega _ {a b} \tau} \label{15.99}\]The tetradic notation for the Redfield relaxation operator allows us to identify four classes of relaxation processes, depending on the number of states involved. The origin and meaning of these terms will be discussed below.From Equation \ref{15.96} we note that the largest changes in matrix elements of \(\sigma_I\) result from a resonance condition:\[\begin{aligned} \exp \left[ i \left( E _ {a} - E _ {b} - E _ {c} + E _ {d} \right) t / \hbar \right] & \approx 1 \\ E _ {a} - E _ {b} - E _ {c} + E _ {d} & \approx 0 \end{aligned} \label{15.100}\]which is satisfied when:\[ \begin{array} {l l} {a = c ; b = d} & {\Rightarrow R _ {a b , a b}} \\ {a = b ; c = d} & {\Rightarrow R _ {a a , c c}} \\ {a = b = c = d} & {\Rightarrow R _ {a a , a a}} \end{array} \label{15.101}\]In evaluating relaxation rates, often only these secular terms are retained. Whether this approximation is valid must be considered on a case by case basis and depends on the nature of the system eigenvalues and the bath correlation function.Population Relaxation and the Master EquationTo understand the information in the relaxation operator and the classification of relaxation processes, let’s first consider the relaxation of the diagonal elements of the reduced density matrix. Using the secular approximation,\[\dot{\sigma}_{a a}(t)=-\sum_{b} R_{a a, b b} \sigma_{b b}(t)\]Considering first the case that \(a ≠ b\), Equation \ref{15.97} gives the relaxation operator as\[R_{a a, b b}=-\Gamma_{b a, a b}^{+}-\Gamma_{b a, a b}^{-} \label{15.103}\]Recognizing that \(\Gamma^{+}\) and \(\Gamma^{-}\) are Hermitian conjugates,\[\begin{aligned} R _ {a a , b b} & = - \frac {1} {\hbar^{2}} \left| A _ {a b} \right|^{2} \int _ {0}^{\infty} d \tau C _ {\beta \beta} ( \tau ) e^{- i \omega _ {a b} \tau} + c . c . \\ & = - \frac {1} {\hbar^{2}} \int _ {0}^{\infty} d \tau \left\langle V _ {b a} ( \tau ) V _ {a b} ( 0 ) \right\rangle _ {B} e^{- i \omega _ {a b} \tau} + c . c . \end{aligned} \label{15.104}\]So \(R _ {a a , b b}\) is a real valued quantity.
However, since \(\langle V_{ba}(\tau)V_{ab}(0)\rangle = \langle V_{ba}(0)V_{ab}(-\tau)\rangle\),

\[R_{aa,bb} = -\frac{1}{\hbar^2}\int_{-\infty}^{+\infty} d\tau\, \langle V_{ba}(\tau)\,V_{ab}(0)\rangle_B\, e^{-i\omega_{ab}\tau} \label{15.105}\]

So we see that the relaxation tensor gives the population relaxation rate between states \(a\) and \(b\) that we derived from Fermi's Golden Rule:

\[R_{aa,bb} = -w_{ab} \label{15.106}\]

for \(a \neq b\).

For the case \(a = b\), Equation \ref{15.97} gives the relaxation operator as

\[\begin{aligned} R_{aa,aa} &= -\left(\Gamma^+_{aa,aa} + \Gamma^-_{aa,aa}\right) + \sum_k \left(\Gamma^+_{ak,ka} + \Gamma^-_{ak,ka}\right) \\ &= \sum_{k\neq a}\left(\Gamma^+_{ak,ka} + \Gamma^-_{ak,ka}\right)\end{aligned} \label{15.107}\]

The relaxation accounts for bath-induced dissipation through interactions with all states of the system (last term), but with the influence of self-relaxation (first term) removed. The net result is the total rate of relaxation from \(a\) to all other system states (\(k \neq a\)):

\[R_{aa,aa} = \sum_{k\neq a} w_{ka} \label{15.108}\]

This term, \(R_{aa,aa}\), is also referred to as the inverse of \(T_1\), the population lifetime of the \(a\) state.

The combination of these observations shows that the diagonal elements of the reduced density matrix follow a master equation that describes the net gain and loss of population in a particular state:

\[\dot{\sigma}_{aa}(t) = \sum_{b\neq a} w_{ab}\,\sigma_{bb}(t) - \sum_{k\neq a} w_{ka}\,\sigma_{aa}(t) \label{15.109}\]

Now let's consider the relaxation of the off-diagonal elements of the reduced density matrix. It is instructive to limit ourselves at first to one term in the relaxation operator, so that we write the equation of motion as

\[\dot{\sigma}_{ab}(t) = -i\omega_{ab}\,\sigma_{ab}(t) - R_{ab,ab}\,\sigma_{ab}(t) + \cdots \label{15.110}\]

The relaxation operator gives

\[\begin{align} R_{ab,ab} &= -\left(\Gamma^+_{aa,bb} + \Gamma^-_{aa,bb}\right) + \sum_k\left(\Gamma^+_{ak,ka} + \Gamma^-_{bk,kb}\right) \label{15.111} \\[4pt] &= -\left(\Gamma^+_{aa,bb} + \Gamma^-_{aa,bb} - \Gamma^+_{aa,aa} - \Gamma^-_{bb,bb}\right) + \left(\sum_{k\neq a}\Gamma^+_{ak,ka} + \sum_{k\neq b}\Gamma^-_{bk,kb}\right) \label{15.112} \end{align}\]

In the second step, we have separated the sum into two terms: the first involves relaxation constants for only the two states of the coherence, and the second involves all other states. The latter term looks very similar to the population relaxation rates.
In fact, if we factor out the imaginary parts of these terms and add them as a correction to the frequency in Equation \ref{15.110}, \(\omega_{ab} \rightarrow \omega_{ab} + \operatorname{Im}[R_{ab,ab}]\), then the remaining real part of the second term is directly related to the population lifetimes of the \(a\) and \(b\) states:

\[\begin{aligned} \operatorname{Re}\left(\sum_{k\neq a}\Gamma^+_{ak,ka} + \sum_{k\neq b}\Gamma^-_{bk,kb}\right) &= \frac{1}{2}\sum_{k\neq a} w_{ka} + \frac{1}{2}\sum_{k\neq b} w_{kb} \\ &= \frac{1}{2}\left(\frac{1}{T_{1,a}} + \frac{1}{T_{1,b}}\right)\end{aligned} \label{15.113}\]

This term accounts for the decay of the coherence at a rate given by half the sum of the population relaxation rates of the \(a\) and \(b\) states.

The meaning of the first term on the right-hand side of Equation \ref{15.112} is a little less obvious. If we write out the four contributing relaxation factors explicitly using the system-bath correlation functions in Equations \ref{15.98} and \ref{15.99}, the real part can be written as

\[\begin{align} \operatorname{Re}\left(\Gamma^+_{aa,aa} + \Gamma^-_{bb,bb} - \Gamma^+_{aa,bb} - \Gamma^-_{aa,bb}\right) &= \frac{1}{\hbar^2}\int_0^\infty d\tau\, \left\langle \left[V_{bb}(\tau) - V_{aa}(\tau)\right]\left[V_{bb}(0) - V_{aa}(0)\right]\right\rangle_B \\[4pt] &\equiv \frac{1}{\hbar^2}\int_0^\infty d\tau\, \langle \Delta V(\tau)\,\Delta V(0)\rangle_B \label{15.114B} \end{align}\]

In essence, this term involves an integral over a correlation function that describes fluctuations of the \(a\)-\(b\) energy gap induced by interactions with the bath. So this term accounts for the energy-gap fluctuations that we previously treated with stochastic models. Of course, in the current case we have made a Markovian bath assumption, so the fluctuations are treated as rapid, and are characterized only by an interaction strength \(\Gamma\), which is related to the linewidth. In an identical manner to the fast modulation limit of the stochastic model, we see that the relaxation rate is related to the square of the amplitude of modulation times the correlation time for the bath:

\[\begin{align} \frac{1}{\hbar^2}\int_0^\infty d\tau\,\langle\Delta V(\tau)\,\Delta V(0)\rangle_B &= \frac{\langle \Delta V^2\rangle\,\tau_c}{\hbar^2} \label{15.115} \\[4pt] &\equiv \Gamma \\[4pt] &= \frac{1}{T_2^*}\end{align}\]

As before, this is how the pure dephasing contribution to the Lorentzian lineshape is defined, and it is assigned the time scale \(T_2^*\).

So, to summarize, we see that the relaxation of coherences has a contribution from pure dephasing and from the lifetimes of the states involved.
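Putting illustrative numbers to this analysis, the sketch below evaluates the pure dephasing rate of Equation \ref{15.115} and combines it with assumed population lifetimes, anticipating the total dephasing rate given next. None of these values come from the text; they are model parameters chosen so that the fast-modulation condition holds.

```python
# Sketch: dephasing rates in the fast modulation limit (model values only).
dw_rms = 5.0          # sqrt(<dV^2>)/hbar, energy-gap fluctuation amplitude, rad/ps
tau_c = 0.02          # bath correlation time, ps  (dw_rms * tau_c = 0.1 << 1)
T1a, T1b = 1.0, 0.5   # assumed population lifetimes of states a and b, ps

Gamma = dw_rms**2 * tau_c                      # Eq. (15.115): 1/T2*
T2_star = 1.0 / Gamma
lifetime_rate = 0.5 * (1.0 / T1a + 1.0 / T1b)  # Eq. (15.113)
T2 = 1.0 / (Gamma + lifetime_rate)
print(f"1/T2* = {Gamma:.2f} ps^-1, T2* = {T2_star:.2f} ps, T2 = {T2:.2f} ps")
```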
Explicitly, the equation of motion in Equation \ref{15.110} can be rewritten

\[\dot{\sigma}_{ab}(t) = -i\omega_{ab}\,\sigma_{ab}(t) - \frac{1}{T_2}\,\sigma_{ab}(t) \label{15.116}\]

where the dephasing time is

\[\frac{1}{T_2} = \frac{1}{T_2^*} + \frac{1}{2}\left(\frac{1}{T_{1,a}} + \frac{1}{T_{1,b}}\right) \label{15.117}\]

and the frequency has been corrected, as a result of interactions with the bath, by the (small) imaginary contributions to \(R_{ab,ab}\):

\[\omega_{ab} \rightarrow \omega_{ab} + \operatorname{Im}\left[R_{ab,ab}\right] \label{15.118}\]

This page titled 16.2: A Density Matrix Description of Quantum Relaxation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.1: Time-Evolution with a Time-Independent Hamiltonian
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/02%3A_Introduction_to_Time-Dependent_Quantum_Mechanics/2.01%3A_Time-Evolution_with_a_Time-Independent_Hamiltonian

The time evolution of the state of a quantum system is described by the time-dependent Schrödinger equation (TDSE):

\[i\hbar\frac{\partial}{\partial t}\psi(\overline{r},t) = \hat{H}(\overline{r},t)\,\psi(\overline{r},t) \label{1.1}\]

\(\hat{H}\) is the Hamiltonian operator, which describes all interactions between particles and fields and determines the state of the system in time and space. \(\hat{H}\) is the sum of the kinetic and potential energy. For one particle under the influence of a potential,

\[\hat{H} = -\frac{\hbar^2}{2m}\hat{\nabla}^2 + \hat{V}(\overline{r},t) \label{1.2}\]

The state of the system is expressed through the wavefunction \(\psi(\overline{r},t)\). The wavefunction is complex and cannot itself be observed, but through it we obtain the probability density

\[P = |\psi(\overline{r},t)|^2,\]

which characterizes the spatial probability distribution for the particles described by \(\hat{H}\) at time \(t\). It is also used to calculate the expectation value of an operator \(\hat{A}\):

\[\begin{align} \langle\hat{A}(t)\rangle &= \int \psi^*(\overline{r},t)\,\hat{A}\,\psi(\overline{r},t)\,d\overline{r} \\[4pt] &= \langle\psi(t)|\hat{A}|\psi(t)\rangle \label{1.3}\end{align}\]

Physical observables must be real, and therefore correspond to the expectation values of Hermitian operators (\(\hat{A} = \hat{A}^\dagger\)).

Our first exposure to time-dependence in quantum mechanics is often for the specific case in which the Hamiltonian \(\hat{H}\) is assumed to be independent of time: \(\hat{H} = \hat{H}(\overline{r})\). We then assume a solution with a form in which the spatial and temporal variables in the wavefunction are separable:

\[\psi(\overline{r},t) = \varphi(\overline{r})\,T(t) \label{1.4}\]

\[i\hbar\frac{1}{T(t)}\frac{\partial}{\partial t}T(t) = \frac{\hat{H}(\overline{r})\,\varphi(\overline{r})}{\varphi(\overline{r})} \label{1.5}\]

Here the left-hand side is a function only of time, and the right-hand side is a function only of space (\(\overline{r}\), or rather position and momentum). Equation \ref{1.5} can only be satisfied if both sides are equal to the same constant, \(E\). Taking the right-hand side, we have

\[\frac{\hat{H}(\overline{r})\,\varphi(\overline{r})}{\varphi(\overline{r})} = E \quad\Rightarrow\quad \hat{H}(\overline{r})\,\varphi(\overline{r}) = E\,\varphi(\overline{r}) \label{1.6}\]

This is the Time-Independent Schrödinger Equation (TISE), an eigenvalue equation, for which \(\varphi(\overline{r})\) are the eigenstates and \(E\) are the eigenvalues. Here we note that

\[\langle\hat{H}\rangle = \langle\psi|\hat{H}|\psi\rangle = E,\]

so \(\hat{H}\) is the operator corresponding to \(E\), and, drawing on classical mechanics, we associate its expectation value with the energy of the system.
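Since the TISE is an eigenvalue equation, it is straightforward to solve numerically before we return to the time-dependent part. The sketch below diagonalizes a finite-difference Hamiltonian for a harmonic well; the grid, the potential, and the units (\(\hbar = m = 1\)) are illustrative choices, not part of the text.

```python
# Sketch: solving the TISE (Equation 1.6) by finite differences.
import numpy as np

x = np.linspace(-10, 10, 800)
dx = x[1] - x[0]
V = 0.5 * x**2                                  # harmonic potential, omega = 1

# H = -(1/2) d^2/dx^2 + V(x), with a three-point Laplacian on the grid
lap = (np.diag(np.full(len(x) - 1, 1.0), -1)
       - 2.0 * np.eye(len(x))
       + np.diag(np.full(len(x) - 1, 1.0), +1)) / dx**2
H = -0.5 * lap + np.diag(V)

E, phi = np.linalg.eigh(H)                      # eigenvalues E_n, eigenstates phi_n
print(E[:4])                                    # ~ [0.5, 1.5, 2.5, 3.5] = n + 1/2
```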
Now taking the left-hand side of Equation \ref{1.5} and integrating:

\[\begin{align} i\hbar\frac{1}{T(t)}\frac{\partial T}{\partial t} &= E \\[4pt] \left(\frac{\partial}{\partial t} + \frac{iE}{\hbar}\right)T(t) &= 0 \label{1.7}\end{align}\]

which has solutions of the form

\[T(t) = \exp(-iEt/\hbar) \label{1.8}\]

So, in the case of a bound potential we will have a discrete set of eigenfunctions \(\varphi_n(\overline{r})\) with corresponding energy eigenvalues \(E_n\) from the TISE, and there is a set of corresponding solutions to the TDSE:

\[\psi_n(\overline{r},t) = \varphi_n(\overline{r})\,\underbrace{\exp\left(-iE_n t/\hbar\right)}_{\text{phase factor}} \label{1.9}\]

Phase Factor

For any complex number written in polar form (such as \(re^{i\theta}\)), the phase factor is the complex exponential factor (\(e^{i\theta}\)). The phase factor does not have any physical meaning, since the introduction of a phase factor does not change the expectation values of a Hermitian operator. That is,

\[\langle\phi|A|\phi\rangle = \langle\phi|e^{-i\theta}Ae^{i\theta}|\phi\rangle\]

Since the only time-dependence in \(\psi_n\) is a phase factor, the probability density for an eigenstate is independent of time:

\[P = |\psi_n(t)|^2 = \text{constant}.\]

Therefore, the eigenstates \(\varphi(\overline{r})\) do not change with time and are called stationary states.

However, more generally, a system may exist as a linear combination of eigenstates:

\[\begin{align} \psi(\overline{r},t) &= \sum_n c_n\,\psi_n(\overline{r},t) \\[4pt] &= \sum_n c_n\,e^{-iE_n t/\hbar}\,\varphi_n(\overline{r}) \label{1.10}\end{align}\]

where \(c_n\) are complex amplitudes, with

\[\sum_n |c_n|^2 = 1. \nonumber\]

For such a case, the probability density will oscillate with time. As an example, consider two eigenstates

\[\begin{align} \psi(\overline{r},t) &= \psi_1 + \psi_2 \nonumber \\[4pt] &= c_1\varphi_1 e^{-iE_1 t/\hbar} + c_2\varphi_2 e^{-iE_2 t/\hbar} \label{1.11}\end{align}\]

For this state the probability density oscillates in time as

\[\begin{align} P(t) &= |\psi|^2 \nonumber \\[4pt] &= |\psi_1 + \psi_2|^2 \nonumber \\[4pt] &= |c_1\varphi_1|^2 + |c_2\varphi_2|^2 + c_1^*c_2\,\varphi_1^*\varphi_2\,e^{-i(\omega_2-\omega_1)t} + c_2^*c_1\,\varphi_2^*\varphi_1\,e^{+i(\omega_2-\omega_1)t} \nonumber \\[4pt] &= |\psi_1|^2 + |\psi_2|^2 + 2|\psi_1\psi_2|\cos(\omega_2-\omega_1)t \label{1.12}\end{align}\]

where \(\omega_n = E_n/\hbar\). We refer to a state of the system that gives rise to this time-dependent oscillation in probability density as a coherent superposition state, or coherence. More generally, the oscillation term in Equation \ref{1.12} may also include a time-independent phase factor \(\phi\) that arises from the complex expansion coefficients.

As an example, consider the superposition of the ground and first excited states of the quantum harmonic oscillator.
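Here is a minimal numerical sketch of that example, in dimensionless oscillator units (\(\hbar = m = \omega = 1\), an assumption for illustration); the coefficients match the \(c_0 = 0.5\), \(c_1 \approx 0.87\) quoted below.

```python
# Sketch: probability density oscillation for a superposition of the n = 0 and
# n = 1 harmonic oscillator states (Equations 1.11-1.12).
import numpy as np

x = np.linspace(-5, 5, 400)
dx = x[1] - x[0]
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)        # ground state, E0 = 1/2
psi1 = np.sqrt(2.0) * x * psi0                 # first excited state, E1 = 3/2
c0, c1 = 0.5, np.sqrt(1 - 0.25)                # |c0|^2 + |c1|^2 = 1

def P(t):
    """Probability density of Eq. (1.12); oscillates at w21 = E1 - E0 = 1."""
    psi = c0 * psi0 * np.exp(-0.5j * t) + c1 * psi1 * np.exp(-1.5j * t)
    return np.abs(psi)**2

for t in [0.0, np.pi / 2, np.pi]:              # half a period of the oscillation
    xavg = np.sum(x * P(t)) * dx               # <x(t)> sloshes from side to side
    print(f"t = {t:4.2f}   <x> = {xavg:+.3f}")
```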
The basis wavefunctions, \(\psi_0(x)\) and \(\psi_1(x)\), are stationary, with time-independent probability densities \(P_i = \langle\psi_i(x)|\psi_i(x)\rangle\). If we create a superposition of these states with Equation \ref{1.11}, the time-dependent probability density oscillates, with \(\langle x(t)\rangle\) bearing similarity to the classical motion. (Here \(c_0 = 0.5\) and \(c_1 = 0.87\).)

This page titled 2.1: Time-Evolution with a Time-Independent Hamiltonian is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.2: Exponential Operators Again
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/02%3A_Introduction_to_Time-Dependent_Quantum_Mechanics/2.02%3A_Exponential_Operators

Throughout our work, we will make use of exponential operators of the form \(\hat{T} = e^{-i\hat{A}}\). We will see that these exponential operators act on a wavefunction to move it in time and space. Of particular interest to us is the time-propagator, or time-evolution operator, \(\hat{U} = e^{-i\hat{H}t/\hbar}\), which propagates the wavefunction in time. Note that the operator \(\hat{T}\) is a function of an operator, \(f(\hat{A})\). A function of an operator is defined through its expansion in a Taylor series, for instance

\[\hat{T} = e^{-i\hat{A}} = \sum_{n=0}^{\infty}\frac{(-i\hat{A})^n}{n!} = 1 - i\hat{A} - \frac{\hat{A}\hat{A}}{2} - \cdots \label{1.13}\]

Since we use them so frequently, let's review the properties of exponential operators that can be established with Equation \ref{1.13}. If the operator \(\hat{A}\) is Hermitian, then \(\hat{T} = e^{-i\hat{A}}\) is unitary, i.e., \(\hat{T}^\dagger = \hat{T}^{-1}\). Thus the Hermitian conjugate of \(\hat{T}\) reverses the action of \(\hat{T}\). For the time-propagator \(\hat{U}\), \(\hat{U}^\dagger\) is often referred to as the time-reversal operator.

The eigenstates of the operator \(\hat{A}\) are also eigenstates of \(f(\hat{A})\), and the eigenvalues are functions of the eigenvalues of \(\hat{A}\). Namely, if you know the eigenvalues and eigenvectors of \(\hat{A}\), i.e., \(\hat{A}\varphi_n = a_n\varphi_n\), you can show by expanding the function that

\[f(\hat{A})\,\varphi_n = f(a_n)\,\varphi_n \label{1.14}\]

Our most common application of this property will be to exponential operators involving the Hamiltonian. Given the eigenstates \(\varphi_n\), \(\hat{H}|\varphi_n\rangle = E_n|\varphi_n\rangle\) implies

\[e^{-i\hat{H}t/\hbar}|\varphi_n\rangle = e^{-iE_n t/\hbar}|\varphi_n\rangle \label{1.15}\]

Just as \(\hat{U} = e^{-i\hat{H}t/\hbar}\) is the time-evolution operator, which displaces the wavefunction in time, \(\hat{D}_x(\lambda) = e^{-i\hat{p}_x\lambda/\hbar}\) is the spatial displacement operator that moves \(\psi\) along the \(x\) coordinate: the action of \(\hat{D}_x(\lambda)\) is to displace the wavefunction by an amount \(\lambda\),

\[|\psi(x-\lambda)\rangle = \hat{D}_x(\lambda)|\psi(x)\rangle \label{1.16}\]

Also, applying \(\hat{D}_x(\lambda)\) to a position operator shifts the operator by \(\lambda\):

\[\hat{D}_x^\dagger\,\hat{x}\,\hat{D}_x = \hat{x} + \lambda \label{1.17}\]

Thus \(e^{-i\hat{p}_x\lambda/\hbar}|x\rangle\) is an eigenvector of \(\hat{x}\) with eigenvalue \(x+\lambda\) instead of \(x\). Similarly, \(\hat{D}_y = e^{-i\hat{p}_y\lambda/\hbar}\) generates displacements in \(y\), and \(\hat{D}_z\) in \(z\). Like the time-propagator \(\hat{U}\), the displacement operator \(\hat{D}\) must be unitary, since its action must leave the system normalized. That is, if \(\hat{D}\) shifts the system from \(x_0\) to \(x\), then \(\hat{D}^\dagger\) shifts the system from \(x\) back to \(x_0\).
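A quick numerical check of Equation \ref{1.16}: in the momentum representation the displacement operator is just a multiplication, so an FFT implements it directly. The grid, the Gaussian, and \(\lambda\) are arbitrary illustrative choices (\(\hbar = 1\)).

```python
# Sketch: exp(-i p lambda / hbar) acting on psi(x) produces psi(x - lambda).
import numpy as np

N, L, lam = 512, 20.0, 2.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])   # p = hbar k, with hbar = 1

psi = np.exp(-x**2)                                # Gaussian centered at x = 0
psi_shifted = np.fft.ifft(np.exp(-1j * k * lam) * np.fft.fft(psi))

print(x[np.argmax(np.abs(psi_shifted))])           # peak now at x ~ lambda = 2.5
```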
We know intuitively that linear displacements commute. For example, if we wish to shift a particle in two dimensions, \(x\) and \(y\), the order of displacement does not matter; we end up at the same position. These displacement operators commute, as expected from \([\hat{p}_x,\hat{p}_y] = 0\).

Similar to the displacement operator, we can define rotation operators that depend on the angular momentum operators, \(L_x\), \(L_y\), and \(L_z\). For instance, \(\hat{R}_x(\phi) = e^{-i\phi L_x/\hbar}\) gives a rotation by angle \(\phi\) about the x-axis. Unlike linear displacements, rotations about different axes do not commute. For example, consider a state representing a particle displaced along the z-axis, \(|\mathrm{Z}_0\rangle\). The action of two rotations \(\hat{R}_x\) and \(\hat{R}_y\) by an angle of \(\pi/2\) on this particle differs depending on the order of operation: the results of these two rotations taken in opposite order differ by a rotation about the z-axis. Thus, because rotations about different axes do not commute, we must expect the angular momentum operators, which generate these rotations, not to commute. Indeed, we know that \([L_x,L_y] = i\hbar L_z\): the commutator of rotations about the x and y axes is related to a rotation about z. As with rotation operators, we will need to be careful with time-propagators to determine whether the order of time-propagation matters. This, in turn, will depend on whether the Hamiltonians at two points in time commute.

Useful Properties of Exponential Operators

Finally, it is worth noting some relationships that are important in evaluating the action of exponential operators. In particular, for two operators \(\hat{A}\) and \(\hat{B}\) that do not commute, \(e^{\hat{A}}e^{\hat{B}} \neq e^{\hat{A}+\hat{B}}\), and the transformation of one operator by another follows

\[e^{\hat{A}}\,\hat{B}\,e^{-\hat{A}} = \hat{B} + [\hat{A},\hat{B}] + \frac{1}{2!}[\hat{A},[\hat{A},\hat{B}]] + \cdots\]

Since the TDSE is deterministic and linear in time, we can define an operator that describes the dynamics of the wavefunction:

\[\psi(t) = \hat{U}(t,t_0)\,\psi(t_0) \label{1.20}\]

\(\hat{U}\) is the time-propagator, or time-evolution operator, that evolves the quantum system as a function of time. It represents the solution to the time-dependent Schrödinger equation. To investigate its form, we consider the TDSE for a time-independent Hamiltonian:

\[\frac{\partial}{\partial t}\psi(\overline{r},t) + \frac{i\hat{H}}{\hbar}\psi(\overline{r},t) = 0 \label{1.21}\]

To solve this, we will define an exponential operator \(\hat{T} = \exp(-i\hat{H}t/\hbar)\), which is defined through its expansion in a Taylor series:

\[\begin{align} \hat{T} &= \exp(-i\hat{H}t/\hbar) \\[4pt] &= 1 - \frac{i\hat{H}t}{\hbar} + \frac{1}{2!}\left(\frac{i\hat{H}t}{\hbar}\right)^2 - \cdots \end{align} \label{1.22}\]

You can also confirm from the expansion that \(\hat{T}^{-1} = \exp(i\hat{H}t/\hbar)\), noting that \(\hat{H}\) is Hermitian and commutes with \(\hat{T}\).
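These statements are easy to verify numerically. The sketch below checks unitarity, the inverse, and the eigenvalue property of Equation \ref{1.15} for a random Hermitian matrix (\(\hbar = 1\); illustrative only).

```python
# Sketch: numerical checks on T = exp(-i H t) for Hermitian H.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                        # Hermitian
t = 0.7

T = expm(-1j * H * t)
print(np.allclose(T.conj().T @ T, np.eye(4)))   # unitary: True
print(np.allclose(np.linalg.inv(T), expm(1j * H * t)))  # T^-1 = exp(+iHt): True

E, phi = np.linalg.eigh(H)
n = 2                                           # any eigenstate
print(np.allclose(T @ phi[:, n], np.exp(-1j * E[n] * t) * phi[:, n]))  # Eq. (1.15)
```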
Multiplying Equation \ref{1.21} from the left by \(\hat{T}^{-1}\), we can write

\[\frac{\partial}{\partial t}\left[\exp\left(\frac{i\hat{H}t}{\hbar}\right)\psi(\overline{r},t)\right] = 0 \label{1.23}\]

and integrating \(t_0 \rightarrow t\), we get

\[\exp\left(\frac{i\hat{H}t}{\hbar}\right)\psi(\overline{r},t) - \exp\left(\frac{i\hat{H}t_0}{\hbar}\right)\psi(\overline{r},t_0) = 0 \label{1.24}\]

\[\psi(\overline{r},t) = \exp\left(\frac{-i\hat{H}(t-t_0)}{\hbar}\right)\psi(\overline{r},t_0) \label{1.25}\]

So, comparing to Equation \ref{1.20}, we see that the time-propagator is

\[\hat{U}(t,t_0) = \exp\left(\frac{-i\hat{H}(t-t_0)}{\hbar}\right) \label{1.26}\]

For a time-independent Hamiltonian for which we know the eigenstates \(\varphi_n\) and eigenvalues \(E_n\), we can express this in a practical form using Equation \ref{1.14}:

\[\psi_n(\overline{r},t) = e^{-iE_n(t-t_0)/\hbar}\,\psi_n(\overline{r},t_0) \label{1.27}\]

Alternatively, if we substitute the projection operator (or identity relationship)

\[\sum_n |\varphi_n\rangle\langle\varphi_n| = 1 \label{1.28}\]

into Equation \ref{1.26}, we see

\[\begin{align} \hat{U}(t,t_0) &= e^{-i\hat{H}(t-t_0)/\hbar}\sum_n |\varphi_n\rangle\langle\varphi_n| \\[4pt] &= \sum_n e^{-i\omega_n(t-t_0)}|\varphi_n\rangle\langle\varphi_n| \end{align} \label{1.29}\]

\[\omega_n = \frac{E_n}{\hbar}\]

So now we can write our time-developing wavefunction as

\[\begin{align} |\psi(\overline{r},t)\rangle &= \sum_n e^{-i\omega_n(t-t_0)}|\varphi_n\rangle\langle\varphi_n|\psi(\overline{r},t_0)\rangle \\ &= \sum_n e^{-i\omega_n(t-t_0)}\,c_n\,|\varphi_n\rangle \\ &= \sum_n c_n(t)\,|\varphi_n\rangle \end{align} \label{1.30}\]

As written in Equation \ref{1.20}, the time-propagator \(\hat{U}(t,t_0)\) acts to the right (on kets) to evolve the system in time. The evolution of the conjugate wavefunctions (bras) is under the Hermitian conjugate of \(\hat{U}(t,t_0)\), acting to the left:

\[\langle\psi(t)| = \langle\psi(t_0)|\,\hat{U}^\dagger(t,t_0) \label{1.31}\]

From its definition as an expansion, and recognizing \(\hat{H}\) as Hermitian, you can see that

\[\hat{U}^\dagger(t,t_0) = \exp\left[\frac{i\hat{H}(t-t_0)}{\hbar}\right] \label{1.32}\]

Noting that \(\hat{U}\) is unitary, \(\hat{U}^\dagger = \hat{U}^{-1}\), we often refer to \(\hat{U}^\dagger\) as the time-reversal operator.

This page titled 2.2: Exponential Operators Again is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2.3: Two-Level Systems
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/02%3A_Introduction_to_Time-Dependent_Quantum_Mechanics/2.03%3A_Two-Level_Systems

Let's use the time-propagator in a model calculation that we will refer to often. It is common to reduce or map quantum problems onto a two-level system (2LS). We will pick the most important states for our problem and find strategies for discarding or simplifying the influence of the remaining degrees of freedom. Consider a 2LS with two unperturbed or "zero-order" states \(|\varphi_a\rangle\) and \(|\varphi_b\rangle\) with energies \(\varepsilon_a\) and \(\varepsilon_b\), which are described by a zero-order Hamiltonian \(H_0\):

\[\begin{align} \hat{H}_0 &= |\varphi_a\rangle\varepsilon_a\langle\varphi_a| + |\varphi_b\rangle\varepsilon_b\langle\varphi_b| \\[4pt] &= \begin{pmatrix} \varepsilon_a & 0 \\ 0 & \varepsilon_b \end{pmatrix} \label{1.33}\end{align}\]

These states interact through a coupling \(V\) of the form

\[\begin{align} \hat{V} &= |\varphi_a\rangle V_{ab}\langle\varphi_b| + |\varphi_b\rangle V_{ba}\langle\varphi_a| \\[4pt] &= \begin{pmatrix} 0 & V_{ab} \\ V_{ba} & 0 \end{pmatrix} \label{1.34}\end{align}\]

The full Hamiltonian for the two coupled states is \(\hat{H}\):

\[\begin{align} \hat{H} &= \hat{H}_0 + \hat{V} \\[4pt] &= \begin{pmatrix} \varepsilon_a & V_{ab} \\ V_{ba} & \varepsilon_b \end{pmatrix} \label{1.35}\end{align}\]

The zero-order states are \(|\varphi_a\rangle\) and \(|\varphi_b\rangle\). The coupling mixes these states, leading to two eigenstates of \(\hat{H}\), \(|\varphi_+\rangle\) and \(|\varphi_-\rangle\), with corresponding energy eigenvalues \(\varepsilon_+\) and \(\varepsilon_-\), respectively.

We will ask: if we prepare the system in state \(|\varphi_a\rangle\), what is the time-dependent probability of observing it in \(|\varphi_b\rangle\)? Since \(|\varphi_a\rangle\) and \(|\varphi_b\rangle\) are not eigenstates of \(\hat{H}\), and since our time-propagation will be performed in the eigenbasis using Equation \ref{1.29}, we will need to find the transformation between these bases.

We start by searching for the eigenvalues of the Hamiltonian (Equation \ref{1.35}). Since the Hamiltonian is Hermitian (\(H_{ij} = H_{ji}^*\)), we write

\[V_{ab} = V_{ba}^* = Ve^{-i\varphi} \label{1.36}\]

\[\hat{H} = \begin{pmatrix} \varepsilon_a & Ve^{-i\varphi} \\ Ve^{+i\varphi} & \varepsilon_b \end{pmatrix} \label{1.37}\]

Often the couplings we describe are real, and we can neglect the phase factor \(\varphi\).
Now we define variables for the mean energy and energy splitting between the uncoupled states:

\[E = \frac{\varepsilon_a + \varepsilon_b}{2}\]

\[\Delta = \frac{\varepsilon_a - \varepsilon_b}{2} \label{1.39}\]

We can then obtain the eigenvalues of the coupled system by solving the secular equation

\[\det(H - \lambda I) = 0\]

giving

\[\varepsilon_\pm = E \pm \Omega \label{1.41}\]

where we have defined another variable

\[\Omega = \sqrt{\Delta^2 + V^2} \label{1.42}\]

To determine the eigenvectors \(|\varphi_\pm\rangle\) of the coupled system, it proves to be a great simplification to define a mixing angle \(\theta\) that describes the magnitude of the coupling relative to the zero-order energy splitting through

\[\tan 2\theta = \frac{V}{\Delta} \label{1.43}\]

We see that the mixing angle adopts values such that \(0 \leq \theta \leq \pi/4\). Also, we note that

\[\sin 2\theta = V/\Omega \label{1.44}\]

\[\cos 2\theta = \Delta/\Omega \label{1.45}\]

In this representation the Hamiltonian (Equation \ref{1.37}) becomes

\[\hat{H} = E\,\overline{I} + \Delta\begin{pmatrix} 1 & \tan 2\theta\, e^{-i\varphi} \\ \tan 2\theta\, e^{+i\varphi} & -1 \end{pmatrix} \label{1.46}\]

and we can express the eigenvalues as

\[\varepsilon_\pm = E \pm \Delta\sec 2\theta \label{1.47}\]

Next we want to find \(S\), the transformation that diagonalizes the Hamiltonian and transforms the coefficients of the wavefunction from the zero-order basis to the eigenbasis. The eigenstates can be expanded in the zero-order basis in the form

\[|\varphi_\pm\rangle = c_a|\varphi_a\rangle + c_b|\varphi_b\rangle \label{1.48}\]

so that the transformation can be expressed in matrix form as

\[\begin{pmatrix} \varphi_+ \\ \varphi_- \end{pmatrix} = S\begin{pmatrix} \varphi_a \\ \varphi_b \end{pmatrix} \label{1.49}\]

To find \(S\), we use the Schrödinger equation \(\hat{H}|\varphi_\pm\rangle = \varepsilon_\pm|\varphi_\pm\rangle\), substituting Equation \ref{1.48}. This gives

\[S = \begin{pmatrix} \cos\theta\, e^{-i\varphi/2} & \sin\theta\, e^{i\varphi/2} \\ -\sin\theta\, e^{-i\varphi/2} & \cos\theta\, e^{i\varphi/2} \end{pmatrix} \label{1.50}\]

Note that \(S\) is unitary, since \(S^\dagger = (S^T)^* = S^{-1}\). Also, the eigenbasis is orthonormal and complete:

\[|\varphi_+\rangle\langle\varphi_+| + |\varphi_-\rangle\langle\varphi_-| = 1.\]

Now, let's examine the eigenstates in two limits:

- In the weak coupling limit (\(V/\Delta \ll 1\)), \(\theta \rightarrow 0\), and \(|\varphi_+\rangle\) and \(|\varphi_-\rangle\) resemble the zero-order states \(|\varphi_a\rangle\) and \(|\varphi_b\rangle\), with only a small admixture of the other state.
- In the strong coupling limit (\(V/\Delta \gg 1\)), \(\theta \rightarrow \pi/4\), and the eigenstates become equal mixtures of the zero-order states, \(|\varphi_\pm\rangle = (|\varphi_a\rangle \pm |\varphi_b\rangle)/\sqrt{2}\) (for \(\varphi = 0\)).

We can schematically represent the energies of these states with the following diagram. Here we explore the range of \(\varepsilon_\pm\) available given a fixed coupling \(V\) and a varying splitting \(\Delta\). This diagram illustrates an avoided crossing effect. The strong coupling limit is equivalent to a degeneracy point (\(\Delta = 0\)) between the states \(|\varphi_a\rangle\) and \(|\varphi_b\rangle\). The eigenstates completely mix the unperturbed states, yet remain split by the strength of interaction \(2V\). We will return to the discussion of avoided crossings when we describe potential energy surfaces and the adiabatic approximation, where the dependence of \(V\) and \(\Delta\) on position \(R\) must be considered.
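A short numerical sketch of this avoided crossing (model numbers, \(\varphi = 0\)): sweeping \(\Delta\) at fixed \(V\) and diagonalizing Equation \ref{1.35} reproduces \(\varepsilon_\pm = E \pm \Omega\), and the splitting never closes below \(2V\).

```python
# Sketch: avoided crossing of the two-level system eigenvalues.
import numpy as np

V, Ebar = 1.0, 0.0
for Delta in [-3.0, -1.0, 0.0, 1.0, 3.0]:
    H = np.array([[Ebar + Delta, V], [V, Ebar - Delta]])
    eps_minus, eps_plus = np.linalg.eigvalsh(H)   # ascending order
    Omega = np.hypot(Delta, V)                    # sqrt(Delta^2 + V^2)
    print(f"Delta = {Delta:+.1f}:  eps = ({eps_minus:+.3f}, {eps_plus:+.3f})"
          f"  E -/+ Omega = ({Ebar - Omega:+.3f}, {Ebar + Omega:+.3f})")
# at Delta = 0 the eigenvalues remain split by eps_plus - eps_minus = 2V
```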
Now we can turn to describing dynamics. The time evolution of this system is given by the time-propagator

\[U(t) = |\varphi_+\rangle e^{-i\omega_+ t}\langle\varphi_+| + |\varphi_-\rangle e^{-i\omega_- t}\langle\varphi_-| \label{1.52}\]

where \(\omega_\pm = \varepsilon_\pm/\hbar\). Since \(\varphi_a\) and \(\varphi_b\) are not the eigenstates, preparing the system in state \(\varphi_a\) will lead to time evolution! Let's prepare the system so that it is initially in \(\varphi_a\):

\[|\psi(0)\rangle = |\varphi_a\rangle \label{1.53} \nonumber\]

Evaluating the time-dependent amplitudes of initial and final states with the help of \(S\), we find

\[\begin{align*} c_a(t) &= \langle\varphi_a|U(t)|\varphi_a\rangle \\[4pt] &= e^{-iEt/\hbar}\left[\cos^2\theta\, e^{-i\Omega_R t} + \sin^2\theta\, e^{i\Omega_R t}\right] \label{1.54} \\[4pt] c_b(t) &= \langle\varphi_b|U(t)|\varphi_a\rangle \\[4pt] &= -2i\sin\theta\cos\theta\, e^{-iEt/\hbar}\sin\Omega_R t \label{1.55} \end{align*}\]

So, what is the probability that the system is found in state \(|\varphi_b\rangle\) at time \(t\)?

\[\begin{aligned} P_{ba}(t) &= |c_b(t)|^2 \\[4pt] &= \frac{V^2}{V^2 + \Delta^2}\sin^2\Omega_R t \end{aligned} \label{1.56}\]

where

\[\Omega_R = \frac{1}{\hbar}\sqrt{\Delta^2 + V^2} \label{1.57}\]

\(\Omega_R\), the Rabi frequency, represents the frequency at which probability amplitude oscillates between the \(\varphi_a\) and \(\varphi_b\) states.

Notice that in the weak coupling limit (\(V \rightarrow 0\)), \(\varphi_\pm \rightarrow \varphi_{a,b}\) (the eigenstates resemble the stationary states), and the time-dependence disappears. In the strong coupling limit (\(V \gg \Delta\)), amplitude is exchanged completely between the zero-order states at a rate given by the coupling: \(\Omega_R \rightarrow V/\hbar\). Even in this limit it takes a finite amount of time for amplitude to move between states. To get \(P = 1\) requires a time \(\tau\):

\[\tau = \frac{\pi}{2\Omega_R} = \frac{\pi\hbar}{2V}. \nonumber\]

This page titled 2.3: Two-Level Systems is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.1: Time-Evolution Operator
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.01%3A_Time-Evolution_Operator

Let's start at the beginning by obtaining the equation of motion that describes the wavefunction and its time evolution through the time-propagator. We are seeking equations of motion for quantum systems that are equivalent to Newton's, or more accurately Hamilton's, equations for classical systems. The question is: if we know the wavefunction at time \(t_0\), \(|\psi(\vec{r},t_0)\rangle\), how does it change with time? How do we determine \(|\psi(\vec{r},t)\rangle\) for some later time \(t > t_0\)? We will use our intuition here, based largely on correspondence to classical mechanics. To keep notation to a minimum, in the following discussion we will not explicitly show the spatial dependence of the wavefunction.

We start by assuming causality: \(|\psi(t_0)\rangle\) precedes and determines \(|\psi(t)\rangle\), which is crucial for deriving a deterministic equation of motion. Also, as usual, we assume time is a continuous variable:

\[\lim_{t\rightarrow t_0}|\psi(t)\rangle = |\psi(t_0)\rangle \label{2.1}\]

Now define a "time-displacement operator" or "propagator" that acts on the wavefunction to the right and thereby propagates the system forward in time:

\[|\psi(t)\rangle = U(t,t_0)|\psi(t_0)\rangle \label{2.2}\]

We also know that the operator \(U\) cannot be dependent on the state of the system \(|\psi(t)\rangle\). This is necessary for conservation of probability, i.e., to retain normalization for the system. If

\[|\psi(t_0)\rangle = a_1|\varphi_1(t_0)\rangle + a_2|\varphi_2(t_0)\rangle \label{2.3}\]

then

\[\begin{align} |\psi(t)\rangle &= U(t,t_0)|\psi(t_0)\rangle \\[4pt] &= U(t,t_0)\,a_1|\varphi_1(t_0)\rangle + U(t,t_0)\,a_2|\varphi_2(t_0)\rangle \\[4pt] &= a_1(t)|\varphi_1\rangle + a_2(t)|\varphi_2\rangle \end{align} \label{2.4}\]

This is a reflection of the importance of linearity and the principle of superposition in quantum mechanical systems. While \(|a_i(t)|\) typically is not equal to \(|a_i(t_0)|\),

\[\sum_n |a_n(t)|^2 = \sum_n |a_n(t_0)|^2 \label{2.5}\]

This dictates that the differential equation of motion is linear in time.

We now make some important and useful observations regarding the properties of \(U\).

1. Unitarity. Conservation of probability, Equation \ref{2.5}, requires that \(U\) be unitary: \(U^\dagger U = 1\).
2. Time continuity: \(U(t,t) = 1\).
3. Composition. Propagation over consecutive time intervals can be written as the product of the individual propagators, with later times acting from the left: \[U(t_2,t_0) = U(t_2,t_1)\,U(t_1,t_0) \label{2.8}\]
4. Time-reversal. The inverse of the time-propagator is the time reversal operator. From Equation \ref{2.8}:

\[\begin{align} U(t,t_0)\,U(t_0,t) &= 1 \label{2.11} \\[4pt] \therefore\,\, U^{-1}(t,t_0) &= U(t_0,t). \label{2.12} \end{align}\]

Let's find an equation of motion that describes the time-evolution operator using the change of the system for an infinitesimal time-step, \(\delta t\): \(U(t+\delta t, t)\). Since

\[\lim_{\delta t\rightarrow 0} U(t+\delta t, t) = 1 \label{2.13}\]

we expect that for small enough \(\delta t\), \(U\) will change linearly with \(\delta t\). This is based on analogy to thinking of deterministic motion in classical systems.
Setting \(t_0\) to 0, so that \(U(t,t_0) = U(t)\), we can write

\[U(t+\delta t) = U(t) - i\hat{\Omega}(t)\,\delta t \label{2.14}\]

\(\hat{\Omega}\) is a time-dependent Hermitian operator, which is required for \(U\) to be unitary. We can now write a differential equation for the time-development of \(U(t,t_0)\), the equation of motion for \(U\):

\[\frac{dU(t)}{dt} = \lim_{\delta t\rightarrow 0}\frac{U(t+\delta t) - U(t)}{\delta t} \label{2.15}\]

So from Equation \ref{2.14} we have:

\[\frac{\partial U(t,t_0)}{\partial t} = -i\hat{\Omega}\,U(t,t_0) \label{2.16}\]

You can now see that the operator needed a complex argument, because otherwise probability density would not be conserved; it would rise or decay. Instead, the phase oscillates through different states of the system.

We note that \(\hat{\Omega}\) has units of frequency. Since quantum mechanics fundamentally associates frequency and energy as \(E = \hbar\omega\), and since the Hamiltonian is the operator corresponding to the energy and is responsible for time evolution in Hamiltonian mechanics, we write

\[\hat{\Omega} = \frac{\hat{H}}{\hbar} \label{2.17}\]

With that substitution we have an equation of motion for \(U\):

\[i\hbar\frac{\partial}{\partial t}U(t,t_0) = \hat{H}\,U(t,t_0) \label{2.18}\]

Multiplying from the right by \(|\psi(t_0)\rangle\) gives the TDSE:

\[i\hbar\frac{\partial}{\partial t}|\psi\rangle = \hat{H}|\psi\rangle \label{2.19}\]

If you use the Hamiltonian for a free particle (\(-(\hbar^2/2m)(\partial^2/\partial x^2)\)), this looks like a classical wave equation, except that it is first order, rather than second order, in time; in fact, it looks like a diffusion equation with an imaginary diffusion constant. We are also interested in the equation of motion for \(U^\dagger\), which describes the time evolution of the conjugate wavefunctions. Following the same approach, and recognizing that \(U^\dagger(t,t_0)\) acts to the left,

\[\langle\psi(t)| = \langle\psi(t_0)|\,U^\dagger(t,t_0), \label{2.20}\]

we get

\[-i\hbar\frac{\partial}{\partial t}U^\dagger(t,t_0) = U^\dagger(t,t_0)\,\hat{H} \label{2.21}\]

At first glance it may seem straightforward to integrate Equation \ref{2.18}. If \(H\) is a function of time, then the integration of \(i\hbar\,dU/U = H\,dt\) gives

\[U(t,t_0) = \exp\left[\frac{-i}{\hbar}\int_{t_0}^{t} H(t')\,dt'\right] \label{2.22}\]

Following our earlier definition of the time-propagator, this exponential would be cast as a series expansion

\[U(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} H(t')\,dt' + \frac{1}{2!}\left(\frac{-i}{\hbar}\right)^2\int_{t_0}^{t} dt'\int_{t_0}^{t} dt''\, H(t')\,H(t'') + \ldots \label{2.23}\]

This approach is dangerous, since we are not properly treating \(H\) as an operator. Looking at the second term in Equation \ref{2.23}, we see that this expression integrates over both possible time-orderings of the two Hamiltonian operations, which would only be proper if the Hamiltonians at different times commute: \([H(t'),H(t'')] = 0\).

Now, let's proceed a bit more carefully, assuming that the Hamiltonians at different times do not commute.
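To see numerically why Equation \ref{2.23} is dangerous, take a Hamiltonian that is \(H_1\) over the first half of an interval and \(H_2\) over the second, with \([H_1,H_2] \neq 0\). (Pauli matrices are used here purely as a minimal noncommuting example; \(\hbar = 1\).)

```python
# Sketch: the naive exponential of the integrated Hamiltonian vs. the
# properly time-ordered product of short-time propagators.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H1, H2, dt = sx, sz, 0.8                                # [H1, H2] != 0

U_naive = expm(-1j * (H1 + H2) * dt)                    # exp[-i int H dt'] in one shot
U_ordered = expm(-1j * H2 * dt) @ expm(-1j * H1 * dt)   # later time acts to the left
print(np.abs(U_naive - U_ordered).max())                # not small: ~O(dt^2 [H1,H2])
```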
Integrating Equation \ref{2.18} directly from \(t_0\) to \(t\) gives

\[U(t,t_0) = 1 - \frac{i}{\hbar}\int_{t_0}^{t} d\tau\, H(\tau)\,U(\tau,t_0) \label{2.24}\]

This is the solution; however, it is not very practical, since \(U(t,t_0)\) is expressed in terms of itself. But we can make an iterative expansion by repetitive substitution of \(U\) into itself. The first step in this process is

\[\begin{align} U(t,t_0) &= 1 - \frac{i}{\hbar}\int_{t_0}^{t} d\tau\, H(\tau)\left[1 - \frac{i}{\hbar}\int_{t_0}^{\tau} d\tau'\, H(\tau')\,U(\tau',t_0)\right] \\[4pt] &= 1 + \left(\frac{-i}{\hbar}\right)\int_{t_0}^{t} d\tau\, H(\tau) + \left(\frac{-i}{\hbar}\right)^2\int_{t_0}^{t} d\tau\int_{t_0}^{\tau} d\tau'\, H(\tau)\,H(\tau')\,U(\tau',t_0) \end{align} \label{2.25}\]

Note in the last term of this equation that the integration limits enforce a time-ordering; that is, the first integration variable \(\tau'\) must precede the second, \(\tau\). Pictorially, the area of integration is the triangle \(t_0 \leq \tau' \leq \tau \leq t\), rather than the full square \([t_0,t]\times[t_0,t]\).

The next substitution step gives

\[\begin{align} U(t,t_0) &= 1 + \left(\frac{-i}{\hbar}\right)\int_{t_0}^{t} d\tau\, H(\tau) \nonumber \\[4pt] &\quad + \left(\frac{-i}{\hbar}\right)^2\int_{t_0}^{t} d\tau\int_{t_0}^{\tau} d\tau'\, H(\tau)\,H(\tau') \label{2.26} \\[4pt] &\quad + \left(\frac{-i}{\hbar}\right)^3\int_{t_0}^{t} d\tau\int_{t_0}^{\tau} d\tau'\int_{t_0}^{\tau'} d\tau''\, H(\tau)\,H(\tau')\,H(\tau'')\,U(\tau'',t_0) \nonumber \end{align}\]

From this expansion, you should be aware that there is a time-ordering to the interactions. For the third term, \(\tau''\) acts before \(\tau'\), which acts before \(\tau\): \(t_0 \leq \tau'' \leq \tau' \leq \tau \leq t\).

What does this expression represent? Imagine you are starting in state \(|\psi_0\rangle = |\ell\rangle\) and you want to describe how the system evolves toward a target state \(|\psi\rangle = |k\rangle\). Pictured in terms of these time variables, there are many possible paths by which one can shift amplitude and evolve the phase. The first-order term in Equation \ref{2.26} represents all actions of the Hamiltonian that directly couple \(|\ell\rangle\) and \(|k\rangle\). The second-order term describes possible transitions from \(|\ell\rangle\) to \(|k\rangle\) via an intermediate state \(|m\rangle\). The expression for \(U\) describes all possible paths between initial and final state.
Each of these paths interferes in ways dictated by the acquired phase of our eigenstates under the time-dependent Hamiltonian.

The solution for \(U\) obtained from this iterative substitution is known as the positive time-ordered exponential

\[\begin{aligned} U(t,t_0) &= 1 + \sum_{n=1}^{\infty}\left(\frac{-i}{\hbar}\right)^n\int_{t_0}^{t} d\tau_n\int_{t_0}^{\tau_n} d\tau_{n-1}\cdots\int_{t_0}^{\tau_2} d\tau_1\, H(\tau_n)\,H(\tau_{n-1})\cdots H(\tau_1) \\ &\equiv \hat{T}\exp\left[-\frac{i}{\hbar}\int_{t_0}^{t} d\tau\, H(\tau)\right] \end{aligned} \label{2.27}\]

(\(\hat{T}\) is known as the Dyson time-ordering operator.) In this expression the time-ordering is

\[\begin{array}{l} t_0 \rightarrow \tau_1 \rightarrow \tau_2 \rightarrow \tau_3 \cdots \tau_n \rightarrow t \\ t_0 \rightarrow \quad\cdots\quad \tau'' \rightarrow \tau' \rightarrow \tau \end{array} \label{2.28}\]

So, this expression tells you how a quantum system evolves over a given time interval, and it allows for any possible trajectory from an initial state to a final state through any number of intermediate states. Each term in the expansion accounts for more possible transitions between different intermediate quantum states during this trajectory.

Compare the time-ordered exponential with the traditional expansion of an exponential:

\[1 + \sum_{n=1}^{\infty}\frac{1}{n!}\left(\frac{-i}{\hbar}\right)^n\int_{t_0}^{t} d\tau_n\ldots\int_{t_0}^{t} d\tau_1\, H(\tau_n)\,H(\tau_{n-1})\ldots H(\tau_1) \label{2.29}\]

Here the time variables assume all values, and therefore all orderings for the \(H(\tau_i)\) are calculated. The integration ranges are normalized by the \(n!\) factor (there are \(n!\) time-orderings of the times \(\tau_i\)).

We are also interested in the Hermitian conjugate of \(U(t,t_0)\), which has the equation of motion in Equation \ref{2.21}. If we repeat the method above, remembering that \(U^\dagger(t,t_0)\) acts to the left, then we obtain

\[U^\dagger(t,t_0) = 1 + \frac{i}{\hbar}\int_{t_0}^{t} d\tau\, U^\dagger(t,\tau)\,H(\tau) \label{2.30}\]

Performing iterative substitution leads to a negative time-ordered exponential:

\[U^\dagger(t,t_0) = 1 + \sum_{n=1}^{\infty}\left(\frac{i}{\hbar}\right)^n\int_{t_0}^{t} d\tau_n\int_{t_0}^{\tau_n} d\tau_{n-1}\cdots\int_{t_0}^{\tau_2} d\tau_1\, H(\tau_1)\,H(\tau_2)\cdots H(\tau_n) \label{2.31}\]

Here the \(H(\tau_i)\) act to the left.

This page titled 3.1: Time-Evolution Operator is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.2: Integrating the Schrödinger Equation Directly
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.02%3A_Integrating_the_Schrodinger_Equation_Directly

Okay, how do we evaluate the time-propagator and obtain a time-dependent trajectory for a quantum system? Expressions such as the time-ordered exponential are daunting, and there are no simple ways to handle them. One cannot truncate the exponential, because usually it is not a rapidly converging series. Also, the solutions oscillate rapidly as a result of the phase acquired at the energies of the states involved, which leads to a formidable integration problem: rapid oscillations require small time steps, when in fact the time scales of interest may be far longer. For instance, in a molecular dynamics problem the highest-frequency oscillations may result from electronically excited states with periods of less than a femtosecond, while the nuclear dynamics you hope to describe may occur on time scales of many picoseconds. Rather than general recipes, there exists an arsenal of different strategies that are suited to particular types of problems. The choice of how to proceed is generally dictated by the details of your problem, and is often an art form. Considerable effort needs to be made to formulate the problem, particularly in choosing an appropriate basis set. Here it is our goal to gain some insight into the types of strategies available, working mainly with the principles rather than the specifics of implementation.

Let's begin by discussing the most general approach. With adequate computational resources, we can choose the brute-force approach of numerical integration. We start by choosing a basis set and defining the initial state \(\psi_0\). Then, we can numerically evaluate the time-dependence of the wavefunction over a time period \(t\) by discretizing time into \(n\) small steps of width \(\delta t = t/n\), over which the change of the system is small. A variety of strategies can be pursued in practice.

One possibility is to expand your wavefunction in the basis set of your choice

\[|\psi(t)\rangle = \sum_n c_n(t)|\varphi_n\rangle \label{2.32}\]

and solve for the time-dependence of the expansion coefficients. Substituting into the right side of the TDSE,

\[i\hbar\frac{\partial}{\partial t}|\psi\rangle = \hat{H}|\psi\rangle \label{2.33}\]

and then acting from the left by \(\langle k|\) on both sides leads to an equation that describes their time dependence:

\[i\hbar\frac{\partial c_k(t)}{\partial t} = \sum_n H_{kn}(t)\,c_n(t) \label{2.34}\]

or in matrix form \(i\hbar\dot{c} = Hc\). This represents a set of coupled first-order differential equations, in which amplitude flows between different basis states at rates determined by the matrix elements of the time-dependent Hamiltonian. Such equations are straightforward to integrate numerically. We recognize that we can integrate on a grid if the time step forward (\(\delta t\)) is small enough that the Hamiltonian is essentially constant.
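As a concrete illustration, these coupled equations can be handed directly to a standard ODE integrator. The sketch below drives a model two-level system on resonance (\(\hbar = 1\); all parameters are illustrative).

```python
# Sketch: direct numerical integration of Equation (2.34), i dc/dt = H(t) c.
import numpy as np
from scipy.integrate import solve_ivp

E0, E1, V0 = 0.0, 1.0, 0.1

def H(t):
    Vt = V0 * np.cos((E1 - E0) * t)             # drive resonant with the splitting
    return np.array([[E0, Vt], [Vt, E1]], dtype=complex)

rhs = lambda t, c: -1j * (H(t) @ c)
sol = solve_ivp(rhs, (0.0, 60.0), np.array([1.0 + 0j, 0.0]),
                max_step=0.05, rtol=1e-8, atol=1e-10)

P1 = np.abs(sol.y[1])**2                        # population of the upper state
norm_err = np.abs((np.abs(sol.y)**2).sum(axis=0) - 1).max()
print(f"max P1 = {P1.max():.3f}  (Rabi oscillation), norm error = {norm_err:.1e}")
```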
In that short-time limit, Equation \ref{2.34} becomes

\[i\hbar\,\delta c_k(t) = \sum_n H_{kn}(t)\,c_n(t)\,\delta t \label{2.35}\]

and the system is propagated as

\[c_k(t+\delta t) = c_k(t) + \delta c_k(t) \label{2.36}\]

The downside of such a calculation is the extremely small time steps and significant computational cost required.

Similarly, we can use a grid with short time steps to simplify our time-propagator as

\[\hat{U}(t+\delta t, t) = \exp\left[-\frac{i}{\hbar}\int_{t}^{t+\delta t} dt'\,\hat{H}(t')\right] \approx \exp\left[-\frac{i}{\hbar}\delta t\,\hat{H}(t)\right] \label{2.37}\]

Therefore the time-propagator can be written as a product of \(n\) propagators over these small intervals:

\[\begin{align} \hat{U}(t) &= \lim_{\delta t\rightarrow 0}\left[\hat{U}_n\hat{U}_{n-1}\cdots\hat{U}_2\hat{U}_1\right] \label{2.38A} \\[4pt] &= \lim_{n\rightarrow\infty}\prod_{j=0}^{n-1}\hat{U}_j \label{2.38B} \end{align}\]

Here the time-propagation over the \(j\)th small time step is

\[\begin{aligned} \hat{U}_j &= \exp\left[-\frac{i}{\hbar}\delta t\,\hat{H}_j\right] \\[4pt] \hat{H}_j &= \hat{H}(j\,\delta t) \end{aligned} \label{2.39}\]

Note that the products in Equations \ref{2.38A} and \ref{2.38B} are time-ordered from right to left (which we denote with the "+" subscript). Although Equation \ref{2.38B} is exact in the limit \(\delta t \rightarrow 0\) (or \(n\rightarrow\infty\)), we can choose a finite number \(n\) such that \(H(t)\) does not change much over the time \(\delta t\). In this limit the time-propagator does not change much and can be approximated as an expansion

\[\hat{U}_j \approx 1 - \frac{i}{\hbar}\delta t\,\hat{H}_j. \label{2.40}\]

In a general sense this approach is not very practical. The first reason is that the time step is determined by \(\delta t < \hbar/|H|\), which is typically very small in comparison to the dynamics of interest. The second complication arises when the kinetic and potential energy operators in the Hamiltonian do not commute. Taking the Hamiltonian to be \(\hat{H} = \hat{T} + \hat{V}\),

\[\begin{aligned} e^{-i\hat{H}(t)\delta t/\hbar} &= e^{-i(\hat{T}(t)+\hat{V}(t))\delta t/\hbar} \\[4pt] &\approx e^{-i\hat{T}(t)\delta t/\hbar}\, e^{-i\hat{V}(t)\delta t/\hbar} \end{aligned} \label{2.41}\]

The second line makes the split-operator approximation, which states that the time-propagator over a short enough period can be approximated as a product of independent propagators evolving the system under the kinetic and potential energy separately. The validity of this approximation depends on how well these operators commute and on the time step, with the error scaling like \(\frac{1}{2}[\hat{T}(t),\hat{V}(t)](\delta t/\hbar)^2\), meaning that we should use a time step such that \(\delta t < \left\{2\hbar^2/[\hat{T}(t),\hat{V}(t)]\right\}^{1/2}\).

This approximation can be improved by symmetrizing the split operator as

\[e^{-i\hat{H}(t)\delta t/\hbar} \approx e^{-i\hat{V}(t)\frac{\delta t}{2}/\hbar}\, e^{-i\hat{T}(t)\delta t/\hbar}\, e^{-i\hat{V}(t)\frac{\delta t}{2}/\hbar} \label{2.42}\]

Here the error scales as \(\frac{1}{12}(\delta t/\hbar)^3\left\{[\hat{T},[\hat{T},\hat{V}]] + \frac{1}{2}[\hat{V},[\hat{V},\hat{T}]]\right\}\).
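Before turning to the bookkeeping of the half-steps, here is a minimal sketch of symmetric split-operator propagation, Equation \ref{2.42}, for a wavepacket on a grid; the kinetic factor is applied in momentum space with FFTs. Model units \(\hbar = m = \omega = 1\); the grid and initial state are illustrative choices.

```python
# Sketch: symmetric split-operator propagation of a displaced Gaussian in a
# harmonic well; <x> should follow the classical trajectory cos(t).
import numpy as np

N, L, dt, nsteps = 512, 20.0, 0.01, 1000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                                 # potential on the grid
T = 0.5 * k**2                                 # kinetic energy in momentum space

psi = np.exp(-(x - 1.0)**2).astype(complex)    # Gaussian displaced to x = 1
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)

expV_half = np.exp(-0.5j * V * dt)
expT = np.exp(-1j * T * dt)
for _ in range(nsteps):                        # e^{-iV dt/2} e^{-iT dt} e^{-iV dt/2}
    psi = expV_half * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV_half * psi

xavg = np.sum(np.abs(psi)**2 * x) * dx
print(f"<x> at t = {nsteps * dt:.0f}: {xavg:+.3f}   (cos(10) = {np.cos(10.0):+.3f})")
```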
There is no significant increase in computational effort, since adjacent half-step potential propagators can be combined,

\[e^{-i\hat{V}_{j+1}\delta t/2\hbar}\, e^{-i\hat{V}_j\delta t/2\hbar} \approx e^{-i\hat{V}_j\delta t/\hbar},\]

to give

\[U(t) \approx e^{-i\hat{V}_n\frac{\delta t}{2}/\hbar}\left[\prod_{j=1}^{n} e^{-i\hat{V}_j\delta t/\hbar}\, e^{-i\hat{T}_j\delta t/\hbar}\right] e^{-i\hat{V}_0\frac{\delta t}{2}/\hbar} \label{2.44}\]

where \(\hat{V}_j = \hat{V}(j\,\delta t)\).

This page titled 3.2: Integrating the Schrödinger Equation Directly is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.3: Transitions Induced by Time-Dependent Potential
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.03%3A_Transitions_Induced_by_Time-Dependent_Potential

For many time-dependent problems, most notably in spectroscopy, we can often partition the problem so that the time-dependent Hamiltonian contains a time-independent part \(H_0\) that we can describe exactly, and a time-dependent potential \(V(t)\):

\[H = H_0 + V(t) \label{2.45}\]

The remaining degrees of freedom are discarded, and enter only in the sense that they give rise to the interaction potential with \(H_0\). This is effective if you have reason to believe that the external Hamiltonian can be treated classically, or if the influence of \(H_0\) on the other degrees of freedom is negligible. From Equation \ref{2.45}, there is a straightforward approach to describing the time-evolving wavefunction for the system in terms of the eigenstates and energy eigenvalues of \(H_0\).

To begin, we know the complete set of eigenstates and eigenvalues for the system Hamiltonian:

\[H_0|n\rangle = E_n|n\rangle \label{2.46}\]

The state of the system can then be expressed as a superposition of these eigenstates:

\[|\psi(t)\rangle = \sum_n c_n(t)|n\rangle \label{2.47}\]

The TDSE can be used to find an equation of motion for the eigenstate coefficients

\[c_k(t) = \langle k|\psi(t)\rangle \label{2.48}\]

Starting with

\[\frac{\partial|\psi\rangle}{\partial t} = \frac{-i}{\hbar}H|\psi\rangle \label{2.49}\]

\[\frac{\partial c_k(t)}{\partial t} = -\frac{i}{\hbar}\langle k|H|\psi(t)\rangle \label{2.50}\]

and from Equation \ref{2.47},

\[\frac{\partial c_k(t)}{\partial t} = -\frac{i}{\hbar}\sum_n \langle k|H|n\rangle\,c_n(t) \label{2.51}\]

Already we see that the time evolution amounts to solving a set of coupled linear ordinary differential equations. These are rate equations with complex rate constants, which describe the feeding of one state into another. Substituting Equation \ref{2.45}, we have

\[\begin{aligned} \frac{\partial c_k(t)}{\partial t} &= -\frac{i}{\hbar}\sum_n \left\langle k\left|\left(H_0 + V(t)\right)\right|n\right\rangle\,c_n(t) \\ &= -\frac{i}{\hbar}\sum_n \left[E_n\delta_{kn} + V_{kn}(t)\right]c_n(t) \end{aligned} \label{2.52}\]

or

\[\frac{\partial c_k(t)}{\partial t} + \frac{i}{\hbar}E_k\,c_k(t) = -\frac{i}{\hbar}\sum_n V_{kn}(t)\,c_n(t) \label{2.53}\]

Next, we define and substitute

\[c_m(t) = e^{-iE_m t/\hbar}\,b_m(t) \label{2.54}\]

which implies a definition for the wavefunction as

\[|\psi(t)\rangle = \sum_n b_n(t)\,e^{-iE_n t/\hbar}|n\rangle \label{2.55}\]

This defines a slightly different complex amplitude that allows us to simplify things considerably. Notice that

\[|b_k(t)|^2 = |c_k(t)|^2.\]

Also, \(b_k(0) = c_k(0)\).
In practice what we are doing is pulling out the "trivial" part of the time evolution, the time-evolving phase factor, which typically oscillates much faster than the changes in the amplitude of \(b\) or \(c\). We will come back to this strategy when we discuss the interaction picture.

Now Equation \ref{2.53} becomes

\[e^{-iE_k t/\hbar}\frac{\partial b_k}{\partial t} = -\frac{i}{\hbar}\sum_n V_{kn}(t)\,e^{-iE_n t/\hbar}\,b_n(t) \label{2.56}\]

or

\[i\hbar\frac{\partial b_k}{\partial t} = \sum_n V_{kn}(t)\,e^{-i\omega_{nk}t}\,b_n(t) \label{2.57}\]

This expression is exact. It is a set of coupled differential equations that describe how probability amplitude moves through eigenstates due to a time-dependent potential. Except in simple cases, these equations cannot be solved analytically, but it is often straightforward to integrate them numerically.

When can we use the approach described here? Consider partitioning the full Hamiltonian into two components: one that we want to study, \(H_0\), and the remaining degrees of freedom, \(H_1\). For each part, we have knowledge of the complete eigenstates and eigenvalues of the Hamiltonian: \(H_i|\psi_{i,n}\rangle = E_{i,n}|\psi_{i,n}\rangle\). These subsystems will interact with one another through \(H_{int}\). If we are careful to partition this in such a way that \(H_{int}\) is small compared to \(H_0\) and \(H_1\), then it should be possible to properly describe the state of the full system as product states of the subsystems: \(|\psi\rangle = |\psi_0\psi_1\rangle\). Further, we can write a time-dependent Schrödinger equation for the motion of each subsystem, for instance:

\[i\hbar\frac{\partial|\psi_1\rangle}{\partial t} = H_1|\psi_1\rangle \label{2.58}\]

Within these assumptions, we can write the complete time-dependent Schrödinger equation in terms of the two sub-states:

\[i\hbar|\psi_0\rangle\frac{\partial|\psi_1\rangle}{\partial t} + i\hbar|\psi_1\rangle\frac{\partial|\psi_0\rangle}{\partial t} = |\psi_0\rangle H_1|\psi_1\rangle + |\psi_1\rangle H_0|\psi_0\rangle + H_{\mathrm{int}}|\psi_0\rangle|\psi_1\rangle \label{2.59}\]

Then, operating from the left with \(\langle\psi_1|\) and making use of Equation \ref{2.58}, we can write

\[i\hbar\frac{\partial|\psi_0\rangle}{\partial t} = \left[H_0 + \langle\psi_1|H_{\mathrm{int}}|\psi_1\rangle\right]|\psi_0\rangle \label{2.60}\]

This is equivalent to the TDSE for a Hamiltonian of the form of Equation \ref{2.45}, where the external interaction \(V(t) = \langle\psi_1|H_{\mathrm{int}}(t)|\psi_1\rangle\) comes from integrating the 1-2 interaction over the subspace of \(|\psi_1\rangle\). So this represents a time-dependent mean-field method.

This page titled 3.3: Transitions Induced by Time-Dependent Potential is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
3.4: Resonant Driving of a Two-Level System
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.04%3A_Resonant_Driving_of_a_Two-Level_System | Let’s describe what happens when you drive a two-level system with an oscillating potential.\[V (t) = V \cos \omega t \label{2.61}\]\[V _ {k \ell} (t) = V _ {k \ell} \cos \omega t \label{2.62}\]Note, this is the form you would expect for an electromagnetic field interacting with charged particles, i.e. dipole transitions. In a simple sense, the electric field is \(\overline {E} (t) = \overline {E} _ {0} \cos \omega t\) and the interaction potential can be written as \(V = - \overline {E} \cdot \overline {\mu},\) where \(\overline {\mu}\) represents the dipole operator.We will look at the form of this interaction a bit more carefully later. We now couple two states \(| a \rangle\) and \(| b \rangle\) with the oscillating field. Here the energy of the states is ordered so that \(\varepsilon _ {b} > \varepsilon _ {a}\). Let’s ask: if the system starts in \(| a \rangle\), what is the probability of finding it in \(| b \rangle\) at time \(t\)?The system of differential equations that describe this problem is:\[\begin{align} i \hbar \frac {\partial} {\partial t} b _ {k} (t) & = \sum _ {n = a , b} b _ {n} (t) V _ {k n} (t) e^{- i \omega _ {n k} t} \\[4pt] & = \sum _ {n = a , b} b _ {n} (t) V _ {k n} e^{- i \omega _ {n k} t} \cdot \frac {1} {2} \left( e^{- i \omega t} + e^{i \omega t} \right) \label{2.63} \end{align}\]where \(\cos \omega t\) is written in its complex form. Writing this explicitly\[i \hbar \dot {b} _ {b} = \frac {1} {2} b _ {a} V _ {b a} \left[ e^{i \left( \omega _ {b a} - \omega \right) t} + e^{i \left( \omega _ {b a} + \omega \right) t} \right] + \frac {1} {2} b _ {b} V _ {b b} \left[ e^{i \omega t} + e^{- i \omega t} \right]\]\[i \hbar \dot {b} _ {a} = \frac {1} {2} b _ {a} V _ {a a} \left[ e^{i \omega t} + e^{- i \omega t} \right] + \frac {1} {2} b _ {b} V _ {a b} \left[ e^{i \left( \omega _ {a b} - \omega \right) t} + e^{i \left( \omega _ {a b} + \omega \right) t} \right]\]or alternatively, rewriting the last term:\[i \hbar \dot {b} _ {a} = \frac {1} {2} b _ {a} V _ {a a} \left[ e^{i \omega t} + e^{- i \omega t} \right] + \frac {1} {2} b _ {b} V _ {a b} \left[ e^{- i \left( \omega _ {b a} + \omega \right) t} + e^{- i \left( \omega _ {b a} - \omega \right) t} \right]\]Here the expressions have been written in terms of the frequency \(\omega_{ba}\). Two of these terms are dropped, since (for our case) the diagonal matrix elements \(V_{ii} =0\). We also make the secular approximation (or rotating wave approximation) in which the nonresonant terms are dropped. When \(\omega _ {b a} \approx \omega\), terms like \(e^{\pm i \omega t}\) or \(e^{i \left( \omega _ {b a} + \omega \right) t}\) oscillate very rapidly (relative to \(\hbar \left| V _ {b a} \right|^{- 1}\)) and so do not contribute much to the change of \(b_n\). (Remember, we take the frequencies \(\omega_{b a}\) and \(\omega\) to be positive). So now we have:\[\dot {b} _ {b} = \frac {- i} {2 \hbar} b _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} \label{2.66}\]\[\dot {b} _ {a} = \frac {- i} {2 \hbar} b _ {b} V _ {a b} e^{- i \left( \omega _ {b a} - \omega \right) t} \label{2.67}\]Note that the coefficients are oscillating at the same frequency but phase shifted to one another.
Now if we differentiate Equation \ref{2.66}:\[\ddot {b} _ {b} = \frac {- i} {2 \hbar} \left[ \dot {b} _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} + i \left( \omega _ {b a} - \omega \right) b _ {a} V _ {b a} e^{i \left( \omega _ {b a} - \omega \right) t} \right] \label{2.68}\]Rewrite Equation \ref{2.66}:\[b _ {a} = \frac {2 i \hbar} {V _ {b a}} \dot {b} _ {b} e^{- i \left( \omega _ {b a} - \omega \right) t} \label{2.69}\]and substitute Equations \ref{2.69} and \ref{2.67} into Equation \ref{2.68}; we get a linear second order equation for \(b_b\).\[\ddot {b} _ {b} - i \left( \omega _ {b a} - \omega \right) \dot {b} _ {b} + \frac {\left| V _ {b a} \right|^{2}} {4 \hbar^{2}} b _ {b} = 0 \label{2.70}\]This has the same form as the second order differential equation for a damped harmonic oscillator (here with an imaginary damping coefficient):\[a \ddot {x} + b \dot {x} + c x = 0 \label{2.71}\]\[x = e^{- ( b / 2 a ) t} ( A \cos \mu t + B \sin \mu t ) \label{2.72}\]with\[\mu = \frac {1} {2 a} \sqrt {4 a c - b^{2}}\]With a little more manipulation, and remembering the initial conditions \(b_b(0)=0\) and \(b_{a}(0) =1\), we find\[P _ {b} (t) = \left| b _ {b} (t) \right|^{2} = \frac {\left| V _ {b a} \right|^{2}} {\left| V _ {b a} \right|^{2} + \hbar^{2} \left( \omega _ {b a} - \omega \right)^{2}} \sin^{2} \Omega _ {R} t \label{2.73}\]where the Rabi frequency is\[\Omega _ {R} = \frac {1} {2 \hbar} \sqrt {\left| V _ {b a} \right|^{2} + \hbar^{2} \left( \omega _ {b a} - \omega \right)^{2}} \label{2.74}\]Also,\[P _ {a} = 1 - P _ {b} \label{2.75}\]The amplitude oscillates back and forth between the two states at a frequency dictated by the coupling between them. [Note a result we will return to later: electric fields couple quantum states, creating coherences!]An important observation concerns the role of resonance between the driving potential and the energy splitting between states. To get transfer of probability density you need the driving field to be at the same frequency as the energy splitting. On resonance, you always drive probability amplitude entirely from one state to another.The efficiency of driving between \(| a \rangle\) and \(|b \rangle\) states drops off with detuning; plotting the maximum value of \(P_b\) as a function of driving frequency gives a Lorentzian centered at \(\omega = \omega_{ba}\).This page titled 3.4: Resonant Driving of a Two-Level System is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,303
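To make the detuning dependence of Equations \ref{2.73} and \ref{2.74} concrete, here is a short numerical sketch (an editorial addition with illustrative parameters, \(\hbar = 1\)):

```python
import numpy as np

hbar = 1.0
V_ba = 0.1                      # coupling |V_ba| (illustrative)
w_ba = 1.0                      # level splitting (E_b - E_a)/hbar

def P_b(t, w):
    """Eq. 2.73: driven two-level population in |b> for drive frequency w."""
    det2 = hbar**2 * (w_ba - w)**2
    Omega_R = 0.5 / hbar * np.sqrt(V_ba**2 + det2)   # Rabi frequency, Eq. 2.74
    return V_ba**2 / (V_ba**2 + det2) * np.sin(Omega_R * t)**2

t = np.linspace(0, 200, 2001)
print("on resonance, max P_b   :", P_b(t, w_ba).max())        # -> 1.0
print("detuned by 2|V|, max P_b:", P_b(t, w_ba + 0.2).max())  # -> 0.2
```

On resonance the maximum of \(P_b\) is unity; detuned so that \(\hbar(\omega_{ba}-\omega) = 2|V_{ba}|\), the maximum drops to \(1/5\), consistent with the Lorentzian prefactor in Equation \ref{2.73}.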
3.5: Schrödinger and Heisenberg Representations
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.05%3A_Schrodinger_and_Heisenberg_Representations | The mathematical formulation of quantum dynamics that has been presented is not unique. So far, we have described the dynamics by propagating the wavefunction, which encodes probability densities. Ultimately, since we cannot measure a wavefunction, we are interested in observables, the expectation values of Hermitian operators, whose time dependence can be interpreted in different ways. Consider the expectation value:\[\begin{align} \langle \hat {A} (t) \rangle & = \langle \psi (t) | \hat {A} | \psi (t) \rangle = \left\langle \psi ( 0 ) \left| U^{\dagger} \hat {A} U \right| \psi ( 0 ) \right\rangle \\[4pt] & = ( \langle \psi ( 0 ) | U^{\dagger} ) \hat {A} ( U | \psi ( 0 ) \rangle ) \label{S rep} \\[4pt] & = \left\langle \psi ( 0 ) \left| \left( U^{\dagger} \hat {A} U \right) \right| \psi ( 0 ) \right\rangle \label{2.76} \end{align} \]The last two expressions are written to emphasize alternate “pictures” of the dynamics. Equation \ref{S rep} is known as the Schrödinger picture, and refers to everything we have done so far. Here we propagate the wavefunction or eigenvectors in time as \(U | \psi \rangle\). Operators are unchanged because they carry no time-dependence. Alternatively, we can work in the Heisenberg picture (Equation \ref{2.76}) that uses the unitary property of \(U\) to time-propagate the operators as \(\hat {A} (t) = U^{\dagger} \hat {A} U,\) but the wavefunction is now stationary. The Heisenberg picture has an appealing physical picture behind it, because particles move. That is, there is a time-dependence to position and momentum.In the Schrödinger picture, the time-development of \(| \psi \rangle\) is governed by the TDSE\[i \hbar \frac {\partial} {\partial t} | \psi \rangle = H | \psi \rangle \label{2.77A}\]or equivalently, the time propagator:\[| \psi (t) \rangle = U \left( t , t _ {0} \right) | \psi \left( t _ {0} \right) \rangle \label{2.77B}\]In the Schrödinger picture, operators are typically independent of time, \(\partial A / \partial t = 0\). What about observables?
For expectation values of operators\[\langle A (t) \rangle = \langle \psi | A | \psi \rangle\]\[\begin{align} i \hbar \frac {\partial} {\partial t} \langle \hat {A} (t) \rangle & = i \hbar \left[ \left\langle \psi | \hat {A} | \frac {\partial \psi} {\partial t} \right\rangle + \left\langle \frac {\partial \psi} {\partial t} | \hat {A} | \psi \right\rangle + \cancel{\left\langle \psi \left| \frac {\partial \hat {A}} {\partial t} \right| \psi \right\rangle} \right] \\[4pt] & = \langle \psi | \hat {A} H | \psi \rangle - \langle \psi | H \hat {A} | \psi \rangle \\[4pt] & = \langle [ \hat {A} , H ] \rangle \label{2.78} \end{align}\]If \(\hat{A}\) is independent of time (as we expect in the Schrödinger picture), and if it commutes with \(\hat{H}\), it is referred to as a constant of motion.From Equation \ref{2.76}, we can distinguish the Schrödinger picture from Heisenberg operators:\[\langle \hat {A} (t) \rangle = \langle \psi (t) | \hat {A} | \psi (t) \rangle _ {S} = \left\langle \psi \left( t _ {0} \right) \left| U^{\dagger} \hat {A} U \right| \psi \left( t _ {0} \right) \right\rangle _ {S} = \langle \psi | \hat {A} (t) | \psi \rangle _ {H} \label{2.79}\]where the operator is defined as\[\left.\begin{aligned} \hat {A} _ {H} (t) & = U^{\dagger} \left( t , t _ {0} \right) \hat {A} _ {S} U \left( t , t _ {0} \right) \\[4pt] \hat {A} _ {H} \left( t _ {0} \right) & = \hat {A} _ {S} \end{aligned} \right. \label{2.80}\]Note, the pictures have the same wavefunction at the reference point \(t_0\). Since the Heisenberg wavefunction is time-independent, \(\partial | \psi _ {H} \rangle / \partial t = 0\), we can relate the Schrödinger and Heisenberg wavefunctions as\[| \psi _ {S} (t) \rangle = U \left( t , t _ {0} \right) | \psi _ {H} \rangle \label{2.81}\]So,\[| \psi _ {H} \rangle = U^{\dagger} \left( t , t _ {0} \right) | \psi _ {S} (t) \rangle = | \psi _ {S} \left( t _ {0} \right) \rangle \label{2.82}\]As expected for a unitary transformation, in either picture the eigenvalues are preserved:\[\begin{align} \hat {A} | \varphi _ {i} \rangle _ {S} & = a _ {i} | \varphi _ {i} \rangle _ {S} \\[4pt] U^{\dagger} \hat {A} U U^{\dagger} | \varphi _ {i} \rangle _ {S} & = a _ {i} U^{\dagger} | \varphi _ {i} \rangle _ {S} \\[4pt] \hat {A} _ {H} | \varphi _ {i} \rangle _ {H} & = a _ {i} | \varphi _ {i} \rangle _ {H} \end{align} \label{2.83}\]The time evolution of the operators in the Heisenberg picture is:\[ \begin{aligned} \frac {\partial \hat {A} _ {H}} {\partial t} & = \frac {\partial} {\partial t} \left( U^{\dagger} \hat {A} _ {s} U \right) = \frac {\partial U^{\dagger}} {\partial t} \hat {A} _ {s} U + U^{\dagger} \hat {A} _ {s} \frac {\partial U} {\partial t} + U^{\dagger} \cancel{\frac {\partial \hat {A}} {\partial t}} U \\[4pt] &= \frac {i} {\hbar} U^{\dagger} H \hat {A} _ {S} U - \frac {i} {\hbar} U^{\dagger} \hat {A} _ {S} H U + \left( \cancel{\frac {\partial \hat {A}} {\partial t}} \right) _ {H} \\[4pt] &= \frac {i} {\hbar} H _ {H} \hat {A} _ {H} - \frac {i} {\hbar} \hat {A} _ {H} H _ {H} \\[4pt] &= - \frac {i} {\hbar} [ \hat {A} , H ] _ {H} \end{aligned} \label{2.84}\]The result\[i \hbar \frac {\partial} {\partial t} \hat {A} _ {H} = [ \hat {A} , H ] _ {H} \label{2.85}\]is known as the Heisenberg equation of motion. Here I have written the odd looking \(H _ {H} = U^{\dagger} H U\). This is mainly to remind one about the time-dependence of \(\hat{H}\). Generally speaking, for a time-independent Hamiltonian \(U = e^{- i H t / \hbar}\), \(U\) and \(H\) commute, and \(H_H =H\).
For a time-dependent Hamiltonian, \(U\) and \(H\) need not commute.The Heisenberg equation is commonly applied to a particle in an arbitrary potential. Consider a particle with an arbitrary one-dimensional potential\[H = \frac {p^{2}} {2 m} + V (x) \label{2.86}\]For this Hamiltonian, the Heisenberg equation gives the time-dependence of the momentum and position as\[\dot {p} = - \frac {\partial V} {\partial x} \label{2.87}\]\[\dot {x} = \frac {p} {m} \label{2.88}\]Here, I have made use of\[\left[ \hat {x}^{n} , \hat {p} \right] = i \hbar n \hat {x}^{n - 1} \label{2.89}\]\[\left[ \hat {x} , \hat {p}^{n} \right] = i \hbar n \hat {p}^{n - 1} \label{2.90}\]Curiously, the factors of \(\hbar\) have vanished in Equations \ref{2.87} and \ref{2.88}, and quantum mechanics does not seem to be present. Instead, these equations indicate that the position and momentum operators follow the same equations of motion as Hamilton’s equations for the classical variables. If we integrate Equation \ref{2.88} over a time period \(t\), for the case that \(\langle p \rangle\) is constant (a free particle), we find that the expectation value for the position of the particle follows the classical motion.\[\langle x (t) \rangle = \frac {\langle p \rangle t} {m} + \langle x ( 0 ) \rangle \label{2.91}\]We can also use the time derivative of Equation \ref{2.88} to obtain an equation that mirrors Newton’s second law of motion, \(F=ma\):\[m \frac {\partial^{2} \langle x \rangle} {\partial t^{2}} = - \langle \nabla V \rangle \label{2.92}\]These observations underlie Ehrenfest’s Theorem, a statement of the classical correspondence of quantum mechanics, which states that the expectation values for the position and momentum operators will follow the classical equations of motion.This page titled 3.5: Schrödinger and Heisenberg Representations is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,304
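A quick numerical check of this classical correspondence (an editorial sketch, not from the original notes; \(\hbar = m = 1\) and all parameters are illustrative): propagate a free-particle Gaussian wavepacket exactly in momentum space and compare \(\langle x(t) \rangle\) with Equation \ref{2.91}.

```python
import numpy as np

# Free particle (V = 0), hbar = m = 1: kinetic propagation is exact in k-space
N, L = 2048, 400.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

x0, p0, sig = -50.0, 1.0, 5.0                  # initial center, momentum, width
psi = np.exp(-(x - x0)**2 / (4 * sig**2) + 1j * p0 * x)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)    # normalize on the grid

t = 60.0
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

x_mean = np.real((np.conj(psi_t) * x * psi_t).sum() * dx)
print("quantum  <x(t)> :", x_mean)             # ~ 10.0
print("classical x(t)  :", x0 + p0 * t)        # <x(0)> + <p> t / m, Eq. 2.91
```

The wavepacket spreads as it propagates, but its mean position tracks the classical trajectory exactly, as Ehrenfest's theorem requires for a free particle.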
3.6: Interaction Picture
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.06%3A_Interaction_Picture | The interaction picture is a hybrid representation that is useful in solving problems with time-dependent Hamiltonians in which we can partition the Hamiltonian as\[H (t) = H_0 + V (t) \label{2.93}\]\(H_0\) is a Hamiltonian for the degrees of freedom we are interested in, which we treat exactly, and can be (although for us usually will not be) a function of time. \(V(t)\) is a time-dependent potential which can be complicated. In the interaction picture, we will treat each part of the Hamiltonian in a different representation. We will use the eigenstates of \(H_0\) as a basis set to describe the dynamics induced by \(V(t)\), assuming that \(V(t)\) is small enough that eigenstates of \(H_0\) are a useful basis. If \(H_0\) is not a function of time, then there is a simple time-dependence to this part of the Hamiltonian that we may be able to account for easily. Setting \(V\) to zero, we can see that the time evolution of the exact part of the Hamiltonian \(H_0\) is described by\[\frac {\partial} {\partial t} U_0 \left( t , t_0 \right) = - \frac {i} {\hbar} H_0 (t) U_0 \left( t , t_0 \right) \label{2.94}\]where,\[U_0 \left( t , t_0 \right) = \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {t_0}^{t} d \tau H_0 ( \tau ) \right] \label{2.95}\]or, for a time-independent \(H_0\),\[U_0 \left( t , t_0 \right) = e^{- i H_0 \left( t - t_0 \right) / \hbar} \label{2.96}\]We define a wavefunction in the interaction picture \(| \psi _ {I} \rangle\) in terms of the Schrödinger wavefunction through:\[| \psi _ {S} (t) \rangle \equiv U_0 \left( t , t_0 \right) | \psi _ {I} (t) \rangle \label{2.97}\]or\[| \psi _ {I} \rangle = U_0^{\dagger} | \psi _ {S} \rangle \label{2.98}\]Effectively the interaction representation defines wavefunctions in such a way that the phase accumulated under \(e^{- i H_0 t / \hbar}\) is removed. For small \(V\), these are typically high frequency oscillations relative to the slower amplitude changes induced by \(V\).Now we need an equation of motion that describes the time evolution of the interaction picture wavefunctions. We begin by substituting Equation \ref{2.97} into the TDSE:\[ \begin{align} i \hbar \frac {\partial} {\partial t} | \psi _ {S} (t) \rangle & = H (t) | \psi _ {S} (t) \rangle \\[4pt] i \hbar \frac {\partial} {\partial t} \left( U_0 \left( t , t_0 \right) | \psi _ {I} (t) \rangle \right) & = \left( H_0 + V (t) \right) U_0 \left( t , t_0 \right) | \psi _ {I} (t) \rangle \\[4pt] H_0 U_0 | \psi _ {I} \rangle + i \hbar U_0 \frac {\partial | \psi _ {I} \rangle} {\partial t} & = \left( H_0 + V (t) \right) U_0 | \psi _ {I} \rangle \end{align} \]where Equation \ref{2.94} was used to evaluate \(\partial U_0 / \partial t\). Cancelling the \(H_0\) terms and left-multiplying by \(U_0^{\dagger}\) gives\[\therefore \quad i \hbar \frac {\partial | \psi _ {I} \rangle} {\partial t} = V_I | \psi _ {I} \rangle \label{2.101}\]where\[V_I (t) = U_0^{\dagger} \left( t , t_0 \right) V (t) U_0 \left( t , t_0 \right) \label{2.102}\]\(| \psi _ {I} \rangle\) satisfies the Schrödinger equation with a new Hamiltonian in Equation \ref{2.102}: the interaction picture Hamiltonian, \(V_I(t)\). We have performed a unitary transformation of \(V(t)\) into the frame of reference of \(H_0\), using \(U_0\). Note that the matrix elements of \(V_I\) are\[\left( V_I \right) _ {k l} = \left\langle k \left| V_I \right| l \right\rangle = e^{- i \omega _ {l k} t} V _ {k l}\]where \(k\) and \(l\) are eigenstates of \(H_0\).
We can now define a time-evolution operator in the interaction picture:\[| \psi _ {I} (t) \rangle = U _ {I} \left( t , t_0 \right) | \psi _ {I} \left( t_0 \right) \rangle \label{2.103}\]where\[U _ {I} \left( t , t_0 \right) = \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {t_0}^{t} d \tau V_I ( \tau ) \right] \label{2.104}\]Now we see that\[\begin{aligned}
\left|\psi_{S}(t)\right\rangle &=U_{0}\left(t, t_{0}\right)\left|\psi_{I}(t)\right\rangle \\[4pt]
&=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\left|\psi_{I}\left(t_{0}\right)\right\rangle \\[4pt]
&=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\left|\psi_{S}\left(t_{0}\right)\right\rangle
\end{aligned}\]\[\therefore U\left(t, t_{0}\right)=U_{0}\left(t, t_{0}\right) U_{I}\left(t, t_{0}\right)\label{2.106}\]Also, the time evolution of the conjugate wavefunction in the interaction picture can be written\[U^{\dagger} \left( t , t_0 \right) = U _ {I}^{\dagger} \left( t , t_0 \right) U_0^{\dagger} \left( t , t_0 \right) = \exp _ {-} \left[ \frac {i} {\hbar} \int _ {t_0}^{t} d \tau V_I ( \tau ) \right] \exp _ {-} \left[ \frac {i} {\hbar} \int _ {t_0}^{t} d \tau H_0 ( \tau ) \right] \label{2.107}\]For the last two expressions, the order of these operators certainly matters. So what changes about the time-propagation in the interaction representation? Let’s start by writing out the time-ordered exponential for \(U\) in Equation \ref{2.106} using Equation \ref{2.104}:\[ \begin{align} U \left( t , t_0 \right) &= U_0 \left( t , t_0 \right) + \left( \frac {- i} {\hbar} \right) \int _ {t_0}^{t} d \tau U_0 ( t , \tau ) V ( \tau ) U_0 \left( \tau , t_0 \right) + \cdots \\[4pt] &= U_0 \left( t , t_0 \right) + \sum _ {n = 1}^{\infty} \left( \frac {- i} {\hbar} \right)^{n} \int _ {t_0}^{t} d \tau _ {n} \int _ {t_0}^{\tau _ {n}} d \tau _ {n - 1} \cdots \int _ {t_0}^{\tau _ {2}} d \tau _ {1} U_0 \left( t , \tau _ {n} \right) V \left( \tau _ {n} \right) U_0 \left( \tau _ {n} , \tau _ {n - 1} \right) \ldots \times U_0 \left( \tau _ {2} , \tau _ {1} \right) V \left( \tau _ {1} \right) U_0 \left( \tau _ {1} , t_0 \right) \label{2.108} \end{align}\]Here I have used the composition property of \(U \left( t , t_0 \right)\). The same positive time-ordering applies. Note that the interactions \(V(\tau_i)\) are not in the interaction representation here. Rather we used the definition in Equation \ref{2.102} and collected terms. Now consider how \(U\) describes the time-dependence if I initiate the system in an eigenstate of \(H_0\), \(| l \rangle\), and observe the amplitude in a target eigenstate \(| k \rangle\). The system evolves in eigenstates of \(H_0\) during the different time periods, with the time-dependent interactions \(V\) driving the transitions between these states. The first-order term describes direct transitions between \(l\) and \(k\) induced by \(V\), integrated over the full time period. Before the interaction, phase is acquired as \(e^{- i E _ {\ell} \left( \tau - t_0 \right) / \hbar}\), whereas after the interaction, phase is acquired as \(e^{- i E _ {k} ( t - \tau ) / \hbar}\). Higher-order terms in the time-ordered exponential account for all possible intermediate pathways.We now know how the interaction picture wavefunctions evolve in time. What about the operators? First of all, from examining the expectation value of an operator we see\[\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \langle \psi (t) | \hat {A} | \psi (t) \rangle \\[4pt] & = \left\langle \psi \left( t_0 \right) \left| U^{\dagger} \left( t , t_0 \right) \hat {A} U \left( t , t_0 \right) \right| \psi \left( t_0 \right) \right\rangle \\[4pt] & = \left\langle \psi \left( t_0 \right) \left| U _ {I}^{\dagger} U_0^{\dagger} \hat {A} U_0 U _ {I} \right| \psi \left( t_0 \right) \right\rangle \\[4pt] & = \left\langle \psi _ {I} (t) \left| \hat {A} _ {I} \right| \psi _ {I} (t) \right\rangle \end{aligned} \right. \label{2.109}\]where\[A _ {I} \equiv U_0^{\dagger} A _ {S} U_0 \label{2.110}\]So the operators in the interaction picture also evolve in time, but under \(H_0\).
This can be expressed as a Heisenberg equation by differentiating\[\frac {\partial} {\partial t} \hat {A} _ {I} = \frac {i} {\hbar} \left[ H_0 , \hat {A} _ {I} \right] \label{2.111}\]Also, we know\[\frac {\partial} {\partial t} | \psi _ {I} \rangle = \frac {- i} {\hbar} V_I (t) | \psi _ {I} \rangle \label{2.112}\]Notice that the interaction representation sits between the Schrödinger and Heisenberg representations: wavefunctions evolve under \(V_I\), while operators evolve under \(H_0\). In the limiting cases:\[\text {For } H_0 = 0 ,\ V (t) = H \quad \Rightarrow \quad \frac {\partial \hat {A}} {\partial t} = 0 ; \quad \frac {\partial} {\partial t} | \psi _ {S} \rangle = \frac {- i} {\hbar} H | \psi _ {S} \rangle \quad \text{(Schrödinger)} \]\[\text {For } H_0 = H ,\ V (t) = 0 \quad \Rightarrow \quad \frac {\partial \hat {A}} {\partial t} = \frac {i} {\hbar} [ H , \hat {A} ] ; \quad \frac {\partial \psi} {\partial t} = 0 \quad \text{(Heisenberg)} \label{2.113}\]Earlier we described how time-dependent problems with Hamiltonians of the form \(H = H_0 + V (t)\) could be solved in terms of the time-evolving amplitudes in the eigenstates of \(H_0\). We can describe the state of the system as a superposition\[| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | n \rangle \label{2.114}\]where the expansion coefficients \(c _ {k} (t)\) are given by\[\left.\begin{aligned} c _ {k} (t) & = \langle k | \psi (t) \rangle = \left\langle k \left| U \left( t , t_0 \right) \right| \psi \left( t_0 \right) \right\rangle \\[4pt] & = \left\langle k \left| U_0 U _ {I} \right| \psi \left( t_0 \right) \right\rangle \\[4pt] & = e^{- i E _ {k} t / \hbar} \left\langle k \left| U _ {I} \right| \psi \left( t_0 \right) \right\rangle \end{aligned} \right. \label{2.115}\]Now, comparing equations \ref{2.115} and \ref{2.54} allows us to recognize that our earlier modified expansion coefficients \(b_n\) were expansion coefficients for interaction picture wavefunctions\[b _ {k} (t) = \langle k | \psi _ {I} (t) \rangle = \left\langle k \left| U _ {I} \right| \psi \left( t_0 \right) \right\rangle \label{2.116}\]This page titled 3.6: Interaction Picture is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,305
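The factorization \(U(t,t_0) = U_0(t,t_0)\,U_I(t,t_0)\) of Equation \ref{2.106} is easy to verify numerically. Below is an editorial sketch (an arbitrary small two-level Hamiltonian with constant coupling; not from the original notes) that builds \(U_I\) by short-time stepping of \(V_I(t)\) from Equation \ref{2.102}:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.diag([0.0, 1.0])                   # illustrative two-level H0
V = np.array([[0.0, 0.2], [0.2, 0.0]])     # constant coupling

t, nsteps = 5.0, 5000
dt = t / nsteps
U_exact = expm(-1j * (H0 + V) * t / hbar)  # full propagator U(t,0)
U0 = expm(-1j * H0 * t / hbar)             # U_0(t,0)

UI = np.eye(2, dtype=complex)              # time-ordered product for U_I(t,0)
for j in range(nsteps):
    tj = (j + 0.5) * dt                    # midpoint of each time slice
    U0j = expm(-1j * H0 * tj / hbar)
    VI = U0j.conj().T @ V @ U0j            # V_I(t) = U0^dag V U0, Eq. 2.102
    UI = expm(-1j * VI * dt / hbar) @ UI   # later times act to the left
print(np.max(np.abs(U_exact - U0 @ UI)))   # ~0, limited by the finite step
```

The positive time ordering appears here as the convention that factors for later times multiply from the left.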
3.7: Time-Dependent Perturbation Theory
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.07%3A_Time-Dependent_Perturbation_Theory | Perturbation theory refers to calculating the time-dependence of a system by truncating the expansion of the interaction picture time-evolution operator after a certain term. In practice, truncating the full time-propagator \(U\) is not effective, and only works well for times short compared to the inverse of the energy splitting between coupled states of your Hamiltonian. The interaction picture applies to Hamiltonians that can be cast as\[H=H_0 + V(t)\]and allows us to focus on the influence of the coupling. We can then treat the time evolution under \(H_0\) exactly, but truncate the influence of \(V(t)\). This works well for weak perturbations. Let’s look more closely at this.We know the eigenstates for \(H_0\):\[H _ {0} | n \rangle = E _ {n} | n \rangle\]and we can calculate the evolution of the wavefunction that results from \(V(t)\):\[| \psi _ {I} (t) \rangle = \sum _ {n} b _ {n} (t) | n \rangle \label{2.117}\]For a given state \(k\), we calculate \(b_k(t)\) as:\[b _ {k} = \left\langle k \left| U _ {I} \left( t , t _ {0} \right) \right| \psi \left( t _ {0} \right) \right\rangle \label{2.118}\]where\[U _ {I} \left( t , t _ {0} \right) = \exp _ {+} \left[ \frac {- i} {\hbar} \int _ {t _ {0}}^{t} V _ {I} ( \tau ) d \tau \right ]\label{2.119}\]Now we can truncate the expansion after a few terms. This works well for small changes in amplitude of the quantum states with small coupling matrix elements relative to the energy splittings involved (\(\left| b _ {k} (t) \right| \approx \left| b _ {k} ( 0 ) \right| ; | V | \ll \left| E _ {k} - E _ {n} \right|\)). As we will see, the results we obtain from perturbation theory are widely used for spectroscopy, condensed phase dynamics, and relaxation. Let’s take the specific case where we have a system prepared in \(| \ell \rangle\), and we want to know the probability of observing the system in \(| k \rangle\) at time \(t\) due to \(V(t)\):\[P _ {k} (t) = \left| b _ {k} (t) \right|^{2} \label{2.120}.\]Expanding\[b _ {k} (t) = \langle k | \exp _ {+} \left[ - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {I} ( \tau ) \right] | \ell \rangle\]\[\left.\begin{aligned} b _ {k} (t) = \langle k | \ell \rangle & - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \left\langle k \left| V _ {I} ( \tau ) \right| \ell \right\rangle \\ & + \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau _ {2} \int _ {t _ {0}}^{\tau _ {2}} d \tau _ {1} \left\langle k \left| V _ {I} \left( \tau _ {2} \right) V _ {I} \left( \tau _ {1} \right) \right| \ell \right\rangle + \ldots \end{aligned} \right.
\label{2.121}\]Now, using\[\left\langle k \left| V _ {I} (t) \right| \ell \right\rangle = \left\langle k \left| U _ {0}^{\dagger} V (t) U _ {0} \right| \ell \right\rangle = e^{- i \omega _ {\ell k} t} V _ {k \ell} (t) \label{2.122}\]we obtain the successive orders of the expansion:\[b _ {k} (t) = \delta _ {k \ell} - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau _ {1} e^{- i \omega _ {\ell k} \tau _ {1}} V _ {k \ell} \left( \tau _ {1} \right) \label{2.123}\](the first-order term)\[+ \sum _ {m} \left( \frac {- i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d \tau _ {2} \int _ {t _ {0}}^{\tau _ {2}} d \tau _ {1} e^{- i \omega _ {m k} \tau _ {2}} V _ {k m} \left( \tau _ {2} \right) e^{- i \omega _ {\ell m} \tau _ {1}} V _ {m \ell} \left( \tau _ {1} \right) \label{2.124}\](the second-order term)\[ + ...\]The first-order term allows only direct transitions between \(| \ell \rangle\) and \(| k \rangle\), as allowed by the matrix element in \(V\), whereas the second-order term accounts for transitions occurring through all possible intermediate states \(| m \rangle\). For perturbation theory, the time-ordered integral is truncated at the appropriate order. Including only the first integral is first-order perturbation theory. The order to which one extends a perturbation theory calculation should be chosen based on which pathways between \(| \ell \rangle\) and \(| k \rangle\) must be accounted for and which are allowed by the matrix elements.For first-order perturbation theory, the expression in Equation \ref{2.123} is the solution to the differential equation that you get for direct coupling between \(| \ell \rangle\) and \(| k \rangle\):\[\frac {\partial} {\partial t} b _ {k} = \frac {- i} {\hbar} e^{- i \omega _ {\ell k} t} V _ {k \ell} (t) b _ {\ell} ( 0 ) \label{2.125}\]This indicates that the solution does not allow for the feedback between \(| \ell \rangle\) and \(| k \rangle\) that accounts for changing populations. This is the reason we say that validity dictates\[\left| b _ {k} (t) \right|^{2} - \left| b _ {k} ( 0 ) \right|^{2} \ll 1.\]If the initial state of the system \(\left|\psi_{0}\right\rangle\) is not an eigenstate of \(H_0\), we can express it as a superposition of eigenstates,\[b _ {k} (t) = \sum _ {n} b _ {n} ( 0 ) \left\langle k \left| U _ {I} \right| n \right\rangle \label{2.126}\]Another observation applies to first-order perturbation theory.
If the system is initially prepared in a state \(| \ell \rangle\), and a time-dependent perturbation is turned on and then turned off over the time interval \(t=-\infty \text{ to } +\infty\), then the complex amplitude in the target state \(| k \rangle\) is just related to the Fourier transform of \(V_{\ell k}(t)\) evaluated at the energy gap \(\omega_{\ell k}\).\[b _ {k} (t) = - \frac {i} {\hbar} \int _ {- \infty}^{+ \infty} d \tau \,e^{- i \omega _ {\ell k} \tau} V _ {k \ell} ( \tau ) \label{2.127}\]If the Fourier transform pair is defined in the following manner:\[\tilde {V} ( \omega ) \equiv \tilde {\mathcal {F}} [ V (t) ] = \int _ {- \infty}^{+ \infty} d t \,V (t) \exp ( i \omega t ) \label{2.128}\]\[V (t) \equiv \tilde {\mathcal {F}}^{- 1} [ \tilde {V} ( \omega ) ] = \frac {1} {2 \pi} \int _ {- \infty}^{+ \infty} d \omega\, \tilde {V} ( \omega ) \exp ( - i \omega t ) \label{2.129}\]Then we can write the probability of transfer to state \(k\) as\[P _ {k \ell} = \frac {\left| \tilde {V} _ {k \ell} \left( \omega _ {k \ell} \right) \right|^{2}} {\hbar^{2}} \label{2.130}\]Example: First-order Perturbation TheoryLet’s consider a simple model for vibrational excitation induced by the compression of a harmonic oscillator. We will subject a harmonic oscillator initially in its ground state to a Gaussian compression pulse, which increases its force constant.First, write the complete time-dependent Hamiltonian:\[H (t) = T + V (t) = \frac {p^{2}} {2 m} + \frac {1} {2} k (t) x^{2} \label{2.131}\]Now, partition it according to \(H=H_0 + V(t) \) in such a manner that we can write \(H_0\) as a harmonic oscillator Hamiltonian. This involves partitioning the time-dependent force constant into two parts:\[k (t) = k _ {0} + \delta k (t)\]\[k _ {0} = m \Omega^{2}\]\[\delta k (t) = \delta k _ {0} \exp \left( - \frac {\left( t - t _ {0} \right)^{2}} {2 \sigma^{2}} \right) \label{2.133}\]\[H=\underbrace{\frac{p^{2}}{2 m}+\frac{1}{2} k_{0} x^{2}}_{H_{0}}+\underbrace{\frac{1}{2} \delta k_{0} x^{2} \exp \left(-\frac{\left(t-t_{0}\right)^{2}}{2 \sigma^{2}}\right)}_{V(t)}\]Here \(\delta k _ {0}\) is the magnitude of the induced change in the force constant, and \(\sigma\) is the time-width of the Gaussian perturbation.
So, we know the eigenstates of \(H_0\): \(H _ {0} | n \rangle = E _ {n} | n \rangle\)\[H_{0}=\hbar \Omega\left(a^{\dagger} a+\frac{1}{2}\right)\]and\[E _ {n} = \hbar \Omega \left( n + \frac {1} {2} \right)\]Now we ask, if the system is in \(|0\rangle\) before applying the perturbation, what is the probability of finding it in state \(n\) after the perturbation?For \(n \neq 0\)\[b _ {n} (t) = \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {n 0} ( \tau ) e^{i \omega _ {n 0} \tau} \label{2.135}\]Using\[\omega _ {n 0} = \left( E _ {n} - E _ {0} \right) / \hbar = n \Omega\]and recognizing that we can set the limits to \(t_{0}=-\infty \text { and } t=\infty\)\[b _ {n} (t) = \frac {- i} {2 \hbar} \delta k _ {0} \left\langle n \left| x^{2} \right| 0 \right\rangle \int _ {- \infty}^{+ \infty} d \tau e^{i n \Omega \tau} e^{- \tau^{2} / 2 \sigma^{2}} \label{2.136}\]This leads to\[b _ {n} (t) = \frac {- i} {2 \hbar} \delta k _ {0} \sqrt {2 \pi} \sigma \left\langle n \left| x^{2} \right| 0 \right\rangle e^{- n^{2} \sigma^{2} \Omega^{2} / 2} \label{2.137}\]Here we made use of an important identity for Gaussian integrals:\[\int _ {- \infty}^{+ \infty} \exp \left( a x^{2} + b x + c \right) d x = \sqrt {\frac {- \pi} {a}} \exp \left( c - \frac {1} {4} \frac {b^{2}} {a} \right) \label{2.138}\]and\[\int _ {- \infty}^{+ \infty} \exp \left( - a x^{2} + i b x \right) d x = \sqrt {\frac {\pi} {a}} \exp \left( - \frac {b^{2}} {4 a} \right) \label{2.139}\]What about the matrix element?\[x^{2} = \frac {\hbar} {2 m \Omega} \left( a + a^{\dagger} \right)^{2} = \frac {\hbar} {2 m \Omega} \left( a a + a^{\dagger} a + a a^{\dagger} + a^{\dagger} a^{\dagger} \right)\label{2.140}\]From these we see that first-order perturbation theory will not allow transitions to \(n =1\), only \(n = 0\) and \(n = 2\). Generally this would not be realistic, because you would certainly expect excitation to \(n=1\) would dominate over excitation to \(n=2\). A real system would also be anharmonic, in which case the leading term in the expansion of the potential V(x), the term linear in x, would not vanish as it does for a harmonic oscillator, and this would lead to matrix elements that raise and lower the excitation by one quantum.However for the present case,\[\left\langle 2 \left| x^{2} \right| 0 \right\rangle = \sqrt {2} \frac {\hbar} {2 m \Omega} \label{2.141}\]So,\[b _ {2} = \frac {- i \sqrt {\pi} \delta k _ {0} \sigma} {2 m \Omega} e^{- 2 \sigma^{2} \Omega^{2}} \label{2.142}\]and we can write the probability of occupying the \(n = 2\) state as\[P _ {2} = \left| b _ {2} \right|^{2} = \frac {\pi \delta k _ {0}^{2} \sigma^{2}} {4 m^{2} \Omega^{2}} e^{- 4 \sigma^{2} \Omega^{2}} \label{2.143}\]From the exponential argument, significant transfer of amplitude occurs when the compression pulse width is small compared to the vibrational period.\[\sigma \ll \dfrac {1} {\Omega} \label{2.144}\]In this regime, the potential is changing faster than the atoms can respond to the perturbation. In practice, when considering a solid-state problem, with frequencies matching those of acoustic phonons and unit cell dimensions, we need perturbations that move faster than the speed of sound, i.e., a shock wave. The opposite limit, \(\sigma \Omega \gg 1\), is the adiabatic limit. In this case, the perturbation is so slow that the system always remains entirely in n=0, even while it is compressed.Now, let’s consider the validity of this first-order treatment. Perturbation theory does not allow for \(b_n\) to change much from its initial value.
First we re-write Equation \ref{2.143} as\[P _ {2} = \sigma^{2} \Omega^{2} \frac {\pi} {4} \left( \frac {\delta k _ {0}^{2}} {k _ {0}^{2}} \right) e^{- 4 \sigma^{2} \Omega^{2}} \label{2.145}\]Now for changes that don’t differ much from the initial value, \(P _ {2} \ll 1\)\[\sigma^{2} \Omega^{2} \frac {\pi} {4} \left( \frac {\delta k _ {0}^{2}} {k _ {0}^{2}} \right) \ll 1 \label{2.146}\]Generally, the magnitude of the perturbation \(\delta k _ {0}\) must be small compared to \(k_0\).The preceding example was simple, but it tracks the general approach to setting up problems that you treat with time-dependent perturbation theory. The approach relies on writing a Hamiltonian that can be cast into a Hamiltonian that you can treat exactly \(H_0\), and time-dependent perturbations that shift amplitudes between its eigenstates. For this scheme to work well, we need the magnitude of perturbation to be small, which immediately suggests working with a Taylor series expansion of the potential. For instance, take a one-dimensional potential for a bound particle, \(V(x)\), which is dependent on the form of an external variable y. We can expand the potential in x about its minimum \(x = 0\) as\[\begin{align} V (x) &= \frac {1} {2 !} \left. \frac {\partial^{2} V} {\partial x^{2}} \right| _ {x = 0} x^{2} + \frac {1} {2 !} \left. \frac {\partial^{2} V} {\partial x \partial y} \right| _ {x = 0} x y + \frac {1} {3 !} \sum _ {y , z} \left. \frac {\partial^{3} V} {\partial x \partial y \partial z} \right| _ {x = 0} x y z + \cdots \label{2.147} \\ &= \frac {1} {2} k x^{2} + V^{( 2 )} x y + \left( V _ {3}^{( 3 )} x^{3} + V _ {2}^{( 3 )} x^{2} y + V _ {1}^{( 3 )} x y^{2} \right) + \cdots\end{align}\]The first term is the harmonic force constant for \(x\), and the second term is a bi-linear coupling whose magnitude \(V^{(2)}\) indicates how much a change in the variable y influences the variable \(x\). The remaining terms are cubic expansion terms. \(V_3^{(3)}\) is the cubic anharmonicity of \(V(x)\), and the remaining two terms are cubic couplings between \(x\) and \(y\). Introducing a time-dependent potential is equivalent to introducing a time-dependence to the operator y, where the form and strength of the interaction is subsumed into the amplitude \(V\). In the case of the previous example, our formulation of the problem was equivalent to selecting only the \(V _ {2}^{( 3 )}\) term, so that \(\delta k _ {0} / 2 = V _ {2}^{( 3 )}\), and giving the value of y a time-dependence described by the Gaussian waveform.This page titled 3.7: Time-Dependent Perturbation Theory is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,306
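As a numerical check on the compression-pulse example (an editorial sketch with illustrative parameters, \(\hbar = m = \Omega = 1\)), one can evaluate the amplitude integral of Equation \ref{2.136} by quadrature and compare \(|b_2|^2\) against the closed-form result quoted above:

```python
import numpy as np

hbar, m, Omega = 1.0, 1.0, 1.0        # illustrative oscillator parameters
dk0, sigma = 0.1, 0.5                 # perturbation strength and pulse width

x2_20 = np.sqrt(2) * hbar / (2 * m * Omega)   # <2|x^2|0>, Eq. 2.141

# Quadrature of Eq. 2.136 for n = 2 (simple Riemann sum over a wide window)
tau = np.linspace(-40 * sigma, 40 * sigma, 200001)
dtau = tau[1] - tau[0]
integrand = np.exp(1j * 2 * Omega * tau - tau**2 / (2 * sigma**2))
b2 = -1j / (2 * hbar) * dk0 * x2_20 * integrand.sum() * dtau

P2_numeric = abs(b2)**2
P2_analytic = (np.pi * dk0**2 * sigma**2 / (4 * m**2 * Omega**2)
               * np.exp(-4 * sigma**2 * Omega**2))
print(P2_numeric, P2_analytic)        # agree to quadrature accuracy
```

The exponential factor \(e^{-4\sigma^2\Omega^2}\) makes \(P_2\) fall off rapidly once \(\sigma\Omega\) exceeds about one, which is the adiabatic regime discussed above.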
3.8: Fermi’s Golden Rule
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/03%3A__Time-Evolution_Operator/3.08%3A_Fermis_Golden_Rule | A number of important relationships in quantum mechanics that describe rate processes come from first-order perturbation theory. These expressions begin with two model problems that we want to work through: (1) the response to a constant perturbation switched on at a given time, and (2) the response to a harmonic perturbation.As before, we will ask: if we prepare the system in the state \(| \ell \rangle\), what is the probability of observing the system in state \(| k \rangle\) following the perturbation?The system is prepared such that \(| \psi ( - \infty ) \rangle = | \ell \rangle\). A constant perturbation of amplitude \(V\) is applied at \(t_0\):\[V (t) = V \Theta \left( t - t _ {0} \right) = \left\{\begin{array} {l l} {0} & {t < t _ {0}} \\ {V} & {t \geq t _ {0}} \end{array} \right. \label{2.148}\]Here \(\Theta \left( t - t _ {0} \right)\) is the Heaviside step function, which is 0 for \(t < t_0\) and 1 for \(t ≥ t_0\). Now, turning to first order perturbation theory, the amplitude in state \(k \neq \ell\) is:\[b _ {k} = - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \,e^{i \omega _ {k \ell} \left( \tau - t _ {0} \right)} V _ {k \ell} ( \tau ) \label{2.149}\]Here \(V_{k\ell}\) is independent of time.Setting \(t_0 = 0\)\[ \begin{align} b _ {k} &= - \frac {i} {\hbar} V _ {k \ell} \int _ {0}^{t} d \tau \,e^{i \omega _ {k \ell} \tau} \\[4pt] &= - \frac {V _ {k \ell}} {E _ {k} - E _ {\ell}} \left[ \exp \left( i \omega _ {k \ell} t \right) - 1 \right] \\[4pt] &= - \frac {2 i V _ {k \ell} e^{i \omega _ {k \ell} t / 2}} {E _ {k} - E _ {\ell}} \sin \left( \omega _ {k \ell} t / 2 \right) \label{2.150} \end{align}\]For Equation \ref{2.150}, the following identity was used\[e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 ).\]Now the probability of being in the \(k\) state is\[ \begin{align} P _ {k} &= \left| b _ {k} \right|^{2} \\[4pt] &= \frac {4 \left| V _ {k \ell} \right|^{2}} {\left| E _ {k} - E _ {\ell} \right|^{2}} \sin^{2} \left( \frac {\omega _ {k \ell} t} {2} \right) \label{2.151} \end{align}\]If we write this using the energy splitting variable we used earlier:\[\Delta = \left( E _ {k} - E _ {\ell} \right) / 2\]then\[P _ {k} = \frac {V^{2}} {\Delta^{2}} \sin^{2} ( \Delta t / \hbar ) \label{2.152}\]Fortunately, we have the exact result for the two-level problem to compare this approximation to\[P _ {k} = \frac {V^{2}} {V^{2} + \Delta^{2}} \sin^{2} \left( \sqrt {\Delta^{2} + V^{2}} t / \hbar \right) \label{2.153}\]From comparing Equations \ref{2.152} and \ref{2.153}, it is clear that the perturbation theory result works well for \(V \ll \Delta\), as expected for this approximation approach.Let’s examine the time-dependence of \(P_k\), and compare the perturbation theory (solid lines) to the exact result (dashed lines) for different values of \(\Delta\).The worst correspondence is for \(\Delta=0.5\) (red curves) for which the behavior appears quadratic and the probability quickly exceeds unity. It is certainly unrealistic, but we do not expect that the expression will hold for the “strong coupling” case: \(\Delta \ll V\). One begins to have quantitative accuracy for the regime \(P _ {k} (t) - P _ {k} ( 0 ) < 0.1\), or \(\Delta > 4V\).Now let’s look at the dependence on \(\Delta\).
We can write the first-order result Equation \ref{2.152} as\[P _ {k} = \frac {V^{2} t^{2}} {\hbar^{2}} \text{sinc}^{2} ( \Delta t / \hbar ) \label{2.154}\]where\[\text{sinc} (x) = \dfrac{\sin (x)}{x}.\]If we plot the probability of transfer from \(| \ell \rangle\) to \(| k \rangle\) as a function of the energy level splitting (\(E_k-E_{\ell}\)), we have: The probability of transfer is sharply peaked where the energy of the initial state matches that of the final state, and the width of the energy mismatch narrows with time. Since\[\lim _ {x \rightarrow 0} \operatorname {sinc} (x) = 1,\]we see that the short time behavior is a quadratic growth in \(P_k\)\[\lim _ {\Delta \rightarrow 0} P _ {k} = \dfrac{V^{2} t^{2}}{\hbar^{2}} \label{2.155}\]The integrated area grows linearly with time.UncertaintySince the energy spread of states to which transfer is efficient scales approximately as \(E _ {k} - E _ {\ell} < 2 \pi \hbar / t\), this observation is sometimes referred to as an uncertainty relation with\[\Delta E \cdot \Delta t \geq 2 \pi \hbar\]However, remember that this is really just an observation of the principles of Fourier transforms. A frequency can only be determined as accurately as the length of the time over which you observe oscillations. Since time is not an operator, it is not a true uncertainty relation like\[\Delta p \cdot \Delta x \geq 2 \pi \hbar.\]In the long time limit, the \(\text{sinc}^2 (x)\) function narrows to a delta function:\[\lim _ {t \rightarrow \infty} \frac {\sin^{2} ( x t / 2 )} {x^{2} t} = \frac {\pi} {2} \delta (x) \label{2.156}\]So the long time probability of being in the \(k\) state is\[\lim _ {t \rightarrow \infty} P _ {k} (t) = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) t \label{2.157}\]The delta function enforces energy conservation, saying that the energies of the initial and target state must be the same in the long time limit. What is interesting in Equation \ref{2.157} is that we see a probability growing linearly in time. This suggests a transfer rate that is independent of time, as expected for simple first-order kinetics:\[w _ {k} (t) = \frac {\partial P _ {k} (t)} {\partial t} = \frac {2 \pi \left| V _ {k \ell} \right|^{2}} {\hbar} \delta \left( E _ {k} - E _ {\ell} \right) \label{2.158}\]This is one statement of Fermi’s Golden Rule—the state-to-state form—which describes relaxation rates from first-order perturbation theory. We will show that this rate properly describes long time exponential relaxation rates that you would expect from the solution to\[\dfrac{d P}{d t} = - w P.\]The second model calculation is the interaction of a system with an oscillating perturbation turned on at time \(t_0 = 0\).
The results will be used to describe how a light field induces transitions in a system through dipole interactions.Again, we are looking to calculate the transition probability between states \(\ell\) and \(k\):\[V (t) = V \cos \omega t \Theta (t) \label{2.159}\]\[ \begin{align} V _ {k \ell} (t) &= V _ {k \ell} \cos \omega t \label{2.160} \\[4pt] &= \frac {V _ {k \ell}} {2} \left[ e^{- i \omega t} + e^{i \omega t} \right] \end{align}\]Setting \(t _ {0} \rightarrow 0\), first-order perturbation theory leads to\[ \begin{align} b _ {k} &= \frac {- i} {\hbar} \int _ {t _ {0}}^{t} d \tau\, V _ {k \ell} ( \tau ) e^{i \omega _ {k \ell} \tau} \\[4pt] &= \frac {- i V _ {k \ell}} {2 \hbar} \int _ {0}^{t} d \tau \left[ e^{i \left( \omega _ {k \ell} - \omega \right) \tau} + e^{i \left( \omega _ {k \ell} + \omega \right) \tau} \right] \\[4pt] &= \frac {- i V _ {k \ell}} {2 \hbar} \left[ \frac {e^{i \left( \omega _ {k \ell} - \omega \right) t} - 1} {i \left( \omega _ {k \ell} - \omega \right)} + \frac {e^{i \left( \omega _ {k \ell} + \omega \right) t} - 1} {i \left( \omega _ {k \ell} + \omega \right)} \right] \end{align} \]Using\[e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 )\]as before:\[b _ {k} = \frac {- i V _ {k \ell}} {\hbar} \left[ \underbrace{\frac {e^{i \left( \omega _ {k \ell} - \omega \right) t / 2} \sin \left[ \left( \omega _ {k \ell} - \omega \right) t / 2 \right]} {\omega _ {k \ell} - \omega}}_{\text{absorption}} + \underbrace{\frac {e^{i \left( \omega _ {k \ell} + \omega \right) t / 2} \sin \left[ \left( \omega _ {k \ell} + \omega \right) t / 2 \right]} {\omega _ {k \ell} + \omega}}_{\text{stimulated emission}} \right] \label{2.162}\]Notice that these terms are only significant when \(\omega \approx \pm \omega _ {k \ell}\). The condition for efficient transfer is resonance, a matching of the frequency of the harmonic interaction with the energy splitting between quantum states. Consider the resonance conditions that will maximize each of these: the first (absorption) term is maximized when \(\omega \approx \omega_{k\ell}\), i.e., \(E_k > E_\ell\), while the second (stimulated emission) term is maximized when \(\omega \approx -\omega_{k\ell}\), i.e., \(E_k < E_\ell\).If we consider only absorption,\[P _ {k \ell} = \left| b _ {k} \right|^{2} = \frac {\left| V _ {k \ell} \right|^{2}} {\hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2}} \quad \sin^{2} \left[ \frac {1} {2} \left( \omega _ {k \ell} - \omega \right) t \right] \label{2.163}\]We can compare this with the exact expression:\[P _ {k \ell} = \left| b _ {k} \right|^{2} = \frac {\left| V _ {k \ell} \right|^{2}} {\hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2} + \left| V _ {k \ell} \right|^{2}} \sin^{2} \left[ \frac {1} {2 \hbar} \sqrt {\left| V _ {k \ell} \right|^{2} + \hbar^{2} \left( \omega _ {k \ell} - \omega \right)^{2}} t \right] \label{2.164}\]Again, we see that the first-order expression is valid for couplings \(\left| V _ {k \ell} \right|\) that are small relative to the detuning \(\Delta \omega = \left( \omega _ {k \ell} - \omega \right)\). The maximum probability for transfer is on resonance, \(\omega _ {k \ell} = \omega\). Similar to our description of the constant perturbation, the long time limit for this expression leads to a delta function \(\delta \left( \omega _ {k \ell} - \omega \right)\). In this long time limit, we can neglect interferences between the resonant and antiresonant terms. The rate of transitions between \(k\) and \(\ell\) states, determined from \(w _ {k \ell} = \partial P _ {k} / \partial t\), becomes\[w _ {k \ell} = \frac {\pi} {2 \hbar^{2}} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{2.165}\]We can examine the limitations of this formula.
When we look for the behavior on resonance, expanding the sine shows us that \(P_k\) rises quadratically for short times:\[\lim _ {\Delta \omega \rightarrow 0} P _ {k} (t) = \frac {\left| V _ {k \ell} \right|^{2}} {4 \hbar^{2}} t^{2} \label{2.166}\]This clearly will not describe long-time behavior, but it will hold for small \(P_k\), so we require\[t \ll \frac {2 \hbar} {V _ {k \ell}} \label{2.167}\]At the same time, we cannot observe the system on too short a time scale. We need the field to make several oscillations for this to be considered a harmonic perturbation.\[t > \frac {1} {\omega} \approx \frac {1} {\omega _ {k \ell}} \label{2.168}\]These relationships imply that we require\[V _ {k \ell} \ll \hbar \omega _ {k \ell} \label{2.169}\]This page titled 3.8: Fermi’s Golden Rule is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,307
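These validity conditions can be seen directly by tabulating the first-order result, Equation \ref{2.163}, against the exact expression, Equation \ref{2.164}, at a fixed observation time. A short editorial sketch (the weak coupling and observation time are arbitrary choices, \(\hbar = 1\)):

```python
import numpy as np

hbar, V, t = 1.0, 0.02, 50.0            # weak coupling |V_kl|, snapshot time

for dw in [0.0, 0.05, 0.1, 0.2]:        # detuning (w_kl - w)
    # First-order result, Eq. 2.163, written with sinc to handle dw -> 0
    P1 = (V * t / (2 * hbar))**2 * np.sinc(dw * t / (2 * np.pi))**2
    # Exact two-level result, Eq. 2.164
    arg = 0.5 / hbar * np.sqrt(V**2 + hbar**2 * dw**2) * t
    Pex = V**2 / (hbar**2 * dw**2 + V**2) * np.sin(arg)**2
    print(f"detuning {dw:.2f}: first-order {P1:.4f}, exact {Pex:.4f}")
```

For \(|V_{k\ell}|\) small compared to \(\hbar(\omega_{k\ell}-\omega)\) the two expressions agree closely; on resonance the first-order result begins to overshoot once \(P_k\) is no longer small, consistent with Equation \ref{2.167}.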
4.1: Introduction to Dissipative Dynamics
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/04%3A_Irreversible_Relaxation/4.01%3A_Introduction_to_Dissipative_Dynamics | How does irreversible behavior, a hallmark of chemical systems, arise from the deterministic Time Dependent Schrödinger Equation? We will answer this question specifically in the context of quantum transitions from a given energy state of the system to energy states of its surroundings. Qualitatively, such behavior can be expected to arise from destructive interference between the oscillatory solutions of the system and those of the closely packed manifold of bath energy states. To illustrate this point, consider the following calculation for the probability amplitude for an initial state of the system coupled to a finite but growing number of randomly chosen states belonging to the bath.Here, even with only 100 or 1000 states, recurrences in the initial state amplitude are suppressed by destructive interference between paths. Clearly in the limit that the accepting states are truly continuous, the initial amplitude prepared in \(\ell\) will be spread through an infinite number of continuum states. We will look at this more closely by describing the relaxation of an initially prepared state as a result of coupling to a continuum of states of the surroundings. This is common to all dissipative processes in which the surroundings of the system of interest form a continuous band of states.To begin, let us define a continuum. We are familiar with eigenfunctions being characterized by quantized energy levels, where only discrete values of the energy are allowed. However, this is not a general requirement. Discrete levels are characteristic of particles in bound potentials, but free particles can take on a continuous range of energies given by their momentum,\[ E = \dfrac{\langle p^2 \rangle}{2m}.\]The same applies to dissociative potential energy surfaces, and bound potentials in which the energy exceeds the binding energy. For instance, photoionization or photodissociation of a molecule involves a light field coupling a bound state into a continuum. Other examples are common in condensed matter. The intermolecular motions of a liquid, the lattice vibrations of a crystal, or the allowed energies within the band structure of a metal or semiconductor are all examples of a continuum.For a discrete state embedded in such a continuum, the Golden Rule gives the rate of transition from the system state \(| \ell \rangle\) to a continuum state \(| k \rangle\) as:\[\overline {w} _ {k \ell} = \frac {\partial \overline {P} _ {k \ell}} {\partial t} = \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \rho \left( E _ {k} = E _ {\ell} \right)\]The transition rate \(\overline {w} _ {k \ell}\) is constant in time, when \(\left| V _ {k \ell} \right|^{2}\) is constant in time, which will be true for short time intervals. Under these conditions integrating the rate equation gives\[\begin{align} \overline {P} _ {k \ell} &= \overline {w} _ {k \ell} \left( t - t _ {0} \right) \\[4pt] \overline {P} _ {\ell \ell} &= 1 - \overline {P} _ {k \ell}. \end{align}\]The probability of transition to the continuum of bath states varies linearly in time. As we noted, this will clearly only work for times such that\[P _ {k} (t) - P _ {k} ( 0 ) \ll 1.\]What long time behavior do we expect?
A time independent rate with population governed by\[\overline {w} _ {k \ell} = \partial \overline {P} _ {k \ell} / \partial t\]is a hallmark of first order kinetics and exponential relaxation. In fact, for exponential relaxation out of a state \(\ell\), the short time behavior looks just like the first order result:\[\begin{align} \overline {P} _ {\ell \ell} (t) &= \exp \left( - \overline {w} _ {k \ell} t \right) \\ &= 1 - \overline {w} _ {k \ell} t + \ldots\label{3.4} \end{align} \]So we might believe that \(\overline {w} _ {k \ell}\) represents the tangent to the relaxation behavior at \(t = 0\). The problem we had previously was that we did not account for depletion of the initial state. In fact, when we look a bit more carefully, we will see that the long time relaxation behavior of state \(\ell\) is exponential and governed by the golden rule rate. The decay of the initial state is irreversible because there is feedback with a distribution of destructively interfering phases.Let’s formulate this problem a bit more carefully. We will look at transitions to a continuum of states \(\{k \}\) from an initial state \(\ell\) under a constant perturbation.These states together form a complete set; so for\[H (t) = H _ {0} + V (t)\]with \(H _ {0} | n \rangle = E _ {n} | n \rangle\).\[1 = \sum _ {n} | n \rangle \langle n | = | \ell \rangle \langle \ell | + \sum _ {k} | k \rangle \langle k | \label{3.5}\]As we go on, you will see that we can identify \(\ell\) with the “system” and \(\{k \}\) with the “bath” when we partition\[H _ {0} = H _ {S} + H _ {B}.\]Now let’s make some simplifying assumptions. For transitions into the continuum, we will assume that transitions only occur between \(\ell\) and states of the continuum, but that there are no interactions between states of the continuum: \(\left\langle k | V | k^{\prime} \right\rangle = 0\). This can be rationalized by thinking of this problem as a discrete set of states interacting with a continuum of normal modes. Moreover, we will assume that the coupling of the initial to continuum states is a constant for all states \(k\): \(\langle \ell | V | k \rangle = \left\langle \ell | V | k^{\prime} \right\rangle = \cdots\). For reasons that we will see later, we will also retain the diagonal matrix element \(\langle \ell | V | \ell \rangle = V_{\ell \ell}\). With these assumptions, we can summarize the Hamiltonian for our problem as\[\begin{aligned}
H(t) &=H_{0}+V(t) \\
H_{0} &=|\ell\rangle E_{\ell}\left\langle\ell\left|+\sum_{k}\right| k\right\rangle E_{k}\langle k| \\
V(t) &=\sum_{k}\left[|k\rangle V_{k \ell}\langle\ell|+| \ell\rangle V_{\ell k}\langle k|\right]+|\ell\rangle V_{\ell \ell}\langle\ell|
\label{3.6}\end{aligned}\]We are seeking a more accurate description of the occupation of the initial and continuum states, for which we will use the interaction picture expansion coefficients\[b _ {k} (t) = \left\langle k \left| U _ {I} \left( t , t _ {0} \right) \right| \ell \right\rangle \label{3.7}\]Earlier, we saw that the exact solution to \(U_I\) was:\[U _ {I} \left( t , t _ {0} \right) = 1 - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau V _ {I} ( \tau ) U _ {I} \left( \tau , t _ {0} \right) \label{3.8}\]This form was not very practical, since \(U_I\) is a function of itself. For first-order perturbation theory, we set the final factor in this equation to unity, \(U _ {I} \left( \tau , t _ {0} \right) \rightarrow 1\). Here, in order to keep the feedback between \( |\ell \rangle \) and the continuum states, we keep it as is.\[b _ {k} (t) = \langle k | \ell \rangle - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d \tau \left\langle k \left| V _ {I} ( \tau ) U _ {I} \left( \tau , t _ {0} \right) \right| \ell \right\rangle \label{3.9}\]Inserting Equation \ref{3.7}, and recognizing \(k \neq \ell\),\[b _ {k} (t) = - \frac {i} {\hbar} \sum _ {n} \int _ {t _ {0}}^{t} d \tau e^{i \omega _ {k n} \tau} V _ {k n} b _ {n} ( \tau ) \label{3.10}\]Note, \(V_{kn}\) is not a function of time. Equation \ref{3.10} expresses the occupation of state \(k\) in terms of the full history of the system from \(t _ {0} \rightarrow t\) with amplitude flowing back and forth between the states \(n\). Equation \ref{3.10} is just the integral form of the coupled differential equations that we used before:\[i \hbar \frac {\partial b _ {k}} {\partial t} = \sum _ {n} e^{i \omega _ {k n} t} V _ {k n} b _ {n} (t) \label{3.11}\]These exact forms allow for feedback between all the states, in which the amplitudes \(b_k\) depend on all other states. Since amplitude only feeds from \(\ell\) into \(k\), we can remove the summation in Equation \ref{3.10} and express the complex amplitude of a state within the continuum as\[b _ {k} = - \frac {i} {\hbar} V _ {k \ell} \int _ {t _ {0}}^{t} d \tau e^{i \omega _ {k \ell} \tau} b _ {\ell} ( \tau ) \label{3.12}\]We want to calculate the rate of leaving \(| \ell \rangle\), including feeding from the continuum back into the initial state. From Equation \ref{3.11} we can separate terms involving the continuum and the initial state:\[i \hbar \frac {\partial} {\partial t} b _ {\ell} = \sum _ {k \neq \ell} e^{i \omega _ {\ell k} t} V _ {\ell k} b _ {k} + V _ {\ell \ell} b _ {\ell} \label{3.13}\]Now substituting Equation \ref{3.12} into Equation \ref{3.13}, and setting \(t_0 =0\):\[\frac {\partial b _ {\ell}} {\partial t} = - \frac {1} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \int _ {0}^{t} b _ {\ell} ( \tau ) e^{i \omega _ {k \ell} ( \tau - t )} d \tau - \frac {i} {\hbar} V _ {\ell \ell} b _ {\ell} (t) \label{3.14}\]This is an integro-differential equation that describes how the time-development of \(b_ℓ\) depends on the entire history of the system. Note we have two time variables for the two propagation routes:\[\left. \begin{array} {l} {\tau : | \ell \rangle \rightarrow | k \rangle} \\ {t : | k \rangle \rightarrow | \ell \rangle} \end{array} \right. \label{3.15}\]The next assumption is that \(b_ℓ\) varies slowly relative to \(\omega_{kℓ}\), so we can remove it from the integral. This is effectively a weak coupling statement: \(\hbar \omega _ {k \ell} \gg V _ {k \ell}\).
Returning to Equation \ref{3.14}: \(b_\ell\) is a function of time, but since it is in the interaction picture it evolves slowly compared to the \(\omega_{k\ell}\) oscillations in the integral, and can be removed from it.\[\frac {\partial b _ {\ell}} {\partial t} = b _ {\ell} \left[ - \frac {1} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \int _ {0}^{t} e^{i \omega _ {k \ell} ( \tau - t )} d \tau - \frac {i} {\hbar} V _ {\ell \ell} \right] \label{3.16}\]Now we want the long-time evolution of \(b_\ell\), for times \(\omega _ {k \ell} t \gg 1\), so we investigate the integration limit \(t \rightarrow \infty\).Note: Complex integration of Equation \ref{3.16}. Defining \(t^{\prime} = \tau - t\),\[\int _ {0}^{t} e^{i \omega _ {k \ell} ( \tau - t )} d \tau = \int _ {- t}^{0} e^{i \omega _ {k \ell} t^{\prime}} d t^{\prime} \label{3.17}\]The integral \(\lim _ {T \rightarrow \infty} \int _ {- T}^{0} e^{i \omega t^{\prime}} d t^{\prime} = \lim _ {T \rightarrow \infty} \int _ {0}^{T} e^{- i \omega t^{\prime}} d t^{\prime}\) is purely oscillatory and not well behaved. The strategy to solve this is to add an infinitesimal convergence factor \(\varepsilon\) and integrate:\[\begin{align} \lim _ {\varepsilon \rightarrow 0^{+}} \int _ {0}^{\infty} e^{- ( i \omega + \varepsilon ) t^{\prime}} d t^{\prime} & = \lim _ {\varepsilon \rightarrow 0^{+}} \frac {1} {i \omega + \varepsilon} \\ & = \lim _ {\varepsilon \rightarrow 0^{+}} \left( \frac {\varepsilon} {\omega^{2} + \varepsilon^{2}} - i \frac {\omega} {\omega^{2} + \varepsilon^{2}} \right) \\ & = \pi \delta ( \omega ) - i \mathbb {P} \frac {1} {\omega} \label{3.19} \end{align}\](This expression is valid when used under an integral.) In the final term we have written the Cauchy principal part:\[\mathbb {P} \left( \frac {1} {x} \right) = \left\{\begin{array} {l l} {\frac {1} {x}} & {x \neq 0} \\ {0} & {x = 0} \end{array} \right. \label{3.20}\]Using Equation \ref{3.19}, Equation \ref{3.16} becomes\[\frac {\partial b _ {\ell}} {\partial t} = b _ {\ell} \left[ - \underbrace {\frac {\pi} {\hbar^{2}} \sum _ {k \neq \ell} \left| V _ {k \ell} \right|^{2} \delta \left( \omega _ {k \ell} \right)} _ {\text {term 1}} - \frac {i} {\hbar} \underbrace {\left( V _ {\ell \ell} + \mathbb {P} \sum _ {k \neq \ell} \frac {\left| V _ {k \ell} \right|^{2}} {E _ {\ell} - E _ {k}} \right)} _ {\text {term 2}} \right] \label{3.21}\]Note that term 1 is just half the Golden Rule rate, \(\overline {w} _ {k \ell} / 2\), with the rate written explicitly as a sum over continuum states instead of an integral\[\sum _ {k \neq \ell} \delta \left( \omega _ {k \ell} \right) \Rightarrow \hbar \rho \left( E _ {k} = E _ {\ell} \right) \label{3.22}\]\[\overline {w} _ {k \ell} = \int d E _ {k} \rho \left( E _ {k} \right) \left[ \frac {2 \pi} {\hbar} \left| V _ {k \ell} \right|^{2} \delta \left( E _ {k} - E _ {\ell} \right) \right] \label{3.23}\]Term 2 is just the correction of the energy \(E_\ell\) from second-order time-independent perturbation theory,\[\Delta E _ {\ell} = \langle \ell | V | \ell \rangle + \mathbb {P} \sum _ {k \neq \ell} \frac {\left| \langle k | V | \ell \rangle \right|^{2}} {E _ {\ell} - E _ {k}} \label{3.25} \]So, the time evolution of \(b _ {\ell}\) is governed by a simple first-order differential equation\[\frac{\partial b_{\ell}}{\partial t}=b_{\ell}\left(-\frac{\bar{w}_{k \ell}}{2}-\frac{i}{\hbar} \Delta E_{\ell}\right)\]which can be solved with \(b _ {\ell} ( 0 ) = 1\) to give\[b _ {\ell} (t) = \exp \left( - \frac {\overline {w} _ {k \ell} t} {2} - \frac {i} {\hbar} \Delta E _ {\ell} t \right) \label{3.26}\]We see that the amplitude \(b _ {\ell}\) decays exponentially! This is the signature of irreversible relaxation that arises from coupling to the continuum.
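The limiting identity in Equation \ref{3.19} can be checked numerically under an integral. A minimal sketch follows; the Gaussian test function is an assumption made purely for illustration. The real part integrates a smooth function to \(\pi f(0)\), while the imaginary part tends to a principal-value integral, which vanishes here by symmetry.

```python
import numpy as np

# Check Eq. (3.19) under an integral: as eps -> 0+,
#   eps/(w^2 + eps^2) -> pi * delta(w),   w/(w^2 + eps^2) -> P(1/w)
f = lambda w: np.exp(-w**2)              # smooth test function (assumed)
w = np.linspace(-50.0, 50.0, 2_000_001)
dw = w[1] - w[0]
for eps in (1e-1, 1e-2, 1e-3):
    delta_part = np.sum(f(w) * eps / (w**2 + eps**2)) * dw
    pv_part = np.sum(f(w) * w / (w**2 + eps**2)) * dw
    print(f"eps={eps:.0e}  delta_part={delta_part:.5f}  pv_part={pv_part:.1e}")
print("pi * f(0) =", np.pi)              # delta_part converges to this
```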
Now, since there may be additional interferences between paths, we switch from the interaction picture back to the Schrödinger picture,\[c _ {\ell} (t) = \exp \left[ - \left( \frac {\overline {w} _ {k \ell}} {2} + i \frac {E _ {\ell}^{\prime}} {\hbar} \right) t \right] \label{3.27}\]with the corrected energy\[E _ {\ell}^{\prime} \equiv E _ {\ell} + \Delta E _ {\ell} \label{3.28}\]and\[P _ {\ell} = \left| c _ {\ell} \right|^{2} = \exp \left[ - \overline {w} _ {k \ell} t \right] \label{3.29}\]The solutions to the time-dependent Schrödinger equation are expected to be complex and oscillatory. What we see here is a real dissipative component and an imaginary dispersive component. The probability decays exponentially from the initial state. Fermi’s Golden Rule rate tells you about long times!Now, what is the probability of appearing in any of the states \(|k \rangle\)? Using Equation \ref{3.12}:\[\begin{aligned} b _ {k} (t) & = - \frac {i} {\hbar} \int _ {0}^{t} V _ {k \ell} e^{i \omega _ {k \ell} \tau} b _ {\ell} ( \tau ) d \tau \\ & = V _ {k \ell} \frac {1 - \exp \left( - \frac {\overline {w} _ {k \ell}} {2} t - \frac {i} {\hbar} \left( E _ {\ell}^{\prime} - E _ {k} \right) t \right)} {E _ {k} - E _ {\ell}^{\prime} + i \hbar \overline {w} _ {k \ell} / 2} \\ & = V _ {k \ell} \frac {1 - e^{i \omega _ {k \ell} t} b _ {\ell} (t)} {E _ {k} - E _ {\ell}^{\prime} + i \hbar \overline {w} _ {k \ell} / 2} \end{aligned} \label{3.30}\]If we investigate the long time limit (\(t \rightarrow \infty\)), noting that \(P _ {k \ell} = \left| b _ {k} \right|^{2}\), we find\[P _ {k \ell} = \frac {\left| V _ {k \ell} \right|^{2}} {\left( E _ {k} - E _ {\ell}^{\prime} \right)^{2} + \Gamma^{2} / 4} \label{3.31}\]with\[\Gamma \equiv \overline {w} _ {k \ell} \cdot \hbar \label{3.32}\]The probability distribution for occupying states within the continuum is described by a Lorentzian distribution with maximum probability centered at the corrected energy of the initial state, \(E _ {\ell}^{\prime}\). The width of the distribution is given by the relaxation rate, which is a proxy for \(\left| V _ {k \ell} \right|^{2} \rho \left( E _ {\ell} \right)\), the coupling to the continuum and the density of states.This page titled 4.1: Introduction to Dissipative Dynamics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,309 |
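A short sketch of the Lorentzian in Equation \ref{3.31} is given below; the coupling and rate values are assumed for illustration. It tabulates \(P_{k\ell}\) on an energy grid and confirms that the full width at half maximum equals \(\Gamma = \hbar \overline{w}_{k\ell}\).

```python
import numpy as np

# Lorentzian distribution of final-state probabilities, Eq. (3.31)
hbar = 1.0
V = 0.05            # coupling V_kl (assumed)
w_bar = 1.57        # Golden Rule rate (assumed)
Gamma = hbar * w_bar
E_lp = 0.0          # corrected initial-state energy E_l'
E_k = np.linspace(-2.0, 2.0, 1001)
P = V**2 / ((E_k - E_lp)**2 + Gamma**2 / 4)
# FWHM check: P falls to half its peak at E_k - E_l' = +/- Gamma/2
half = P.max() / 2
fwhm = E_k[P >= half][-1] - E_k[P >= half][0]
print("FWHM =", fwhm, " Gamma =", Gamma)
```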
5.1: Introduction to the Density Matrix
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/05%3A_The_Density_Matrix/5.01%3A_Introduction_to_the_Density_Matrix | The density matrix or density operator is an alternate representation of the state of a quantum system for which we have previously used the wavefunction. Although describing a quantum system with the density matrix is equivalent to using the wavefunction, one gains significant practical advantages using the density matrix for certain time-dependent problems—particularly relaxation and nonlinear spectroscopy in the condensed phase.The density matrix is defined as the outer product of the wavefunction with its conjugate.\[\rho (t) \equiv | \psi (t) \rangle \langle \psi (t) | \label{4.1}\]This implies that if you specify a state \(| x \rangle\), then \(\langle x | \rho | x \rangle\) gives the probability of finding a particle in the state \(| x \rangle\). Its name derives from the observation that it plays the quantum role of a probability density. If you think of the statistical description of a classical observable obtained from moments of a probability distribution \(P\), then \(ρ\) plays the role of \(P\) in the quantum case:\[ \begin{align} \langle A \rangle &= \int A P ( A ) d A \label{4.2} \\[4pt] &= \langle \psi | A | \psi \rangle = \operatorname {Tr} [ A \rho ] \label{4.3} \end{align}\]where \(Tr[…]\) refers to tracing over the diagonal elements of the matrix,\[T r [ \cdots ] = \sum _ {a} \langle a | \cdots | a \rangle.\]The last expression is obtained as follows. If the wavefunction for the system is expanded as\[| \psi (t) \rangle = \sum _ {n} c _ {n} (t) | n \rangle \label{4.4}\]the expectation value of an operator is\[\langle \hat {A} (t) \rangle = \sum _ {n , m} c _ {n} (t) c _ {m} ^{*} (t) \langle m | \hat {A} | n \rangle \label{4.5}\]Also, from Equation \ref{4.1} we obtain the elements of the density matrix as\[\left.\begin{aligned} \rho (t) & {= \sum _ {n , m} c _ {n} (t) c _ {m} ^{*} (t) | n \rangle \langle m |} \\[4pt] & {\equiv \sum _ {n , m} \rho _ {n m} (t) | n \rangle \langle m |} \end{aligned} \right. \label{4.6}\]We see that \(\rho_{nm}\), the density matrix elements, are made up of the time-evolving expansion coefficients. Substituting into Equation \ref{4.5} we see that\[\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \sum _ {n , m} A _ {m n} \rho _ {n m} (t) \\[4pt] & = \operatorname {Tr} [ \hat {A} \rho (t) ] \end{aligned} \right. \label{4.7}\]In practice this makes evaluating expectation values as simple as tracing over a product of matrices.What information is in the density matrix elements, \(\rho_{nm}\)? The diagonal elements (\(n = m\)) give the probability of occupying a quantum state:\[\rho _ {n n} = c _ {n} c _ {n} ^{*} = p _ {n} \geq 0 \label{4.8}\]For this reason, diagonal elements are referred to as populations. The off-diagonal elements (\(n \neq m\)) are complex and have a time-dependent phase factor\[ \rho _ {n m} = c _ {n} (t) c _ {m} ^{*} (t) = c _ {n} c _ {m} ^{*} \mathrm {e} ^{- i \omega _ {nm} t} \label{4.9}\]Since these describe the oscillatory behavior of coherent superpositions in the system, they are referred to as coherences.So why would we need the density matrix? It becomes a particularly important tool when dealing with mixed states, which we take up later. 
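Before turning to mixed states, the pure-state relations above are easy to verify numerically. A minimal sketch follows (the random state and observable are assumptions, chosen only to exercise the identities): it checks that \(\operatorname{Tr}[A\rho] = \langle\psi|A|\psi\rangle\) and that the populations sum to one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random normalized pure state and a random Hermitian observable
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = A + A.conj().T
rho = np.outer(psi, psi.conj())           # rho = |psi><psi|, Eq. (4.1)
print(np.vdot(psi, A @ psi).real)         # <psi|A|psi>
print(np.trace(A @ rho).real)             # Tr[A rho], Eq. (4.3): identical
print(np.diag(rho).real.sum())            # populations sum to 1, Eq. (4.8)
```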
Mixed states refer to statistical mixtures in which we have imperfect information about the system, for which we must perform statistical averages in order to describe quantum observables. For mixed states, calculations with the density matrix are greatly simplified. Given that you have a statistical mixture, and can describe \(p_k\), the probability of occupying quantum state \(| \psi _ {k} \rangle\), evaluation of expectation values is simplified with a density matrix:\[\langle \hat {A} (t) \rangle = \sum _ {k} p _ {k} \left\langle \psi _ {k} (t) | \hat {A} | \psi _ {k} (t) \right\rangle \label{4.10}\]\[\rho (t) \equiv \sum _ {k} p _ {k} | \psi _ {k} (t) \rangle \langle \psi _ {k} (t) | \label{4.11}\]\[\langle \hat {A} (t) \rangle = \operatorname {Tr} [ \hat {A} \rho (t) ] \label{4.12}\]Evaluating an expectation value takes the same form for pure and mixed states.Properties of the Density Matrix: We can now summarize some properties of the density matrix, which follow from the definitions above: \(\rho\) is Hermitian, \(\rho _ {n m}^{*} = \rho _ {m n}\); it is normalized, \(\operatorname{Tr} [ \rho ] = 1\); and for a pure state \(\rho^{2} = \rho\) and \(\operatorname{Tr} [ \rho^{2} ] = 1\), whereas for a mixed state \(\operatorname{Tr} [ \rho^{2} ] < 1\). In addition, when working with the density matrix it is convenient to make note of these trace properties: the trace is invariant to cyclic permutation, \(\operatorname{Tr} [ A B C ] = \operatorname{Tr} [ C A B ]\), and is therefore invariant to unitary transformation, \(\operatorname{Tr} [ S^{\dagger} A S ] = \operatorname{Tr} [ A ]\).This page titled 5.1: Introduction to the Density Matrix is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,310 |
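The purity criterion \(\operatorname{Tr}[\rho^2]\) distinguishing pure superpositions from statistical mixtures can be illustrated with a short sketch; the equal-weight two-level example is an assumption made for demonstration.

```python
import numpy as np

# Pure vs. mixed states: Tr(rho) = 1 always; Tr(rho^2) = 1 only for pure states
up = np.array([1, 0], complex)
dn = np.array([0, 1], complex)
sup = (up + dn) / np.sqrt(2)
pure = np.outer(sup, sup.conj())                                   # coherent superposition
mixed = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(dn, dn.conj())  # Eq. (4.11)
for name, rho in (("pure ", pure), ("mixed", mixed)):
    print(name, np.trace(rho).real, np.trace(rho @ rho).real)
# pure : 1.0 1.0   (off-diagonal coherences present)
# mixed: 1.0 0.5   (no coherences; 50/50 statistical mixture)
```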
5.2: Time-Evolution of the Density Matrix
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/05%3A_The_Density_Matrix/5.02%3A_Time-Evolution_of_the_Density_Matrix | The equation of motion for the density matrix follows naturally from the definition of \(\rho\) and the time-dependent Schrödinger equation.\[ \begin{align} \dfrac {\partial \rho} {\partial t} &= \dfrac {\partial} {\partial t} [ | \psi \rangle \langle \psi | ] \\[4pt] &= \left[ \dfrac {\partial} {\partial t} | \psi \rangle \right] \langle \psi | + | \psi \rangle \dfrac {\partial} {\partial t} \langle \psi | \\[4pt] &= \dfrac {- i} {\hbar} H | \psi \rangle \langle \psi | + \dfrac {i} {\hbar} | \psi \rangle \langle \psi | H \label{4.13} \\[4pt] &= \dfrac {- i} {\hbar} [ H , \rho ] \label{4.14} \end{align}\]Equation \ref{4.14} is the Liouville–von Neumann equation. It is isomorphic to the Heisenberg equation of motion (apart from a change of sign), since \(ρ\) is also an operator. The solution to Equation \ref{4.14} is\[\rho (t) = U \rho ( 0 ) U^{\dagger} \label{4.15}\]This can be demonstrated by first integrating Equation \ref{4.14} to obtain\[\rho (t) = \rho ( 0 ) - \dfrac {i} {\hbar} \int _ {0}^{t} d \tau [ H ( \tau ) , \rho ( \tau ) ] \label{4.16}\]If we expand Equation \ref{4.16} by iteratively substituting into itself, the expression is the same as when we substitute\[U = \exp _ {+} \left[ - \dfrac {i} {\hbar} \int _ {0}^{t} d \tau H ( \tau ) \right] \label{4.17}\]into Equation \ref{4.15} and collect terms by orders of \(H(\tau)\).Note that Equation \ref{4.15} and the cyclic invariance of the trace imply that the time-dependent expectation value of an operator can be calculated either by propagating the operator (Heisenberg) or the density matrix (Schrödinger or interaction picture):\[\left.\begin{aligned} \langle \hat {A} (t) \rangle & = \operatorname {Tr} [ \hat {A} \rho (t) ] \\[4pt] & = \operatorname {Tr} \left[ \hat {A} U \rho _ {0} U^{\dagger} \right] \\[4pt] & = \operatorname {Tr} \left[ \hat {A} (t) \rho _ {0} \right] \end{aligned} \right. \label{4.18}\]For a time-independent Hamiltonian it is straightforward to show that the density matrix elements evolve as\[ \begin{align} \rho _ {n m} (t) &= \langle n | \rho (t) | m \rangle \\[4pt] &= \left\langle n | U | \psi _ {0} \right\rangle \left\langle \psi _ {0} \left| U^{\dagger} \right| m \right\rangle \label{4.19} \\[4pt] &= e^{- i \omega _ {n m} \left( t - t _ {0} \right)} \rho _ {n m} \left( t _ {0} \right) \label{4.20} \end{align}\]From this we see that populations, \(\rho _ {n n} (t) = \rho _ {n n} \left( t _ {0} \right)\), are time-invariant, and coherences oscillate at the energy splitting \(\omega _ {n m}\).This page titled 5.2: Time-Evolution of the Density Matrix is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,311 |
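Equation \ref{4.20} can be demonstrated with a two-level sketch; the energies and initial superposition below are assumptions chosen for illustration. The populations stay fixed while the coherence simply acquires a phase.

```python
import numpy as np

# Propagate rho(t) = U rho(0) U^dagger (Eq. 4.15) for a time-independent
# two-level H; populations stay fixed while the coherence picks up a phase.
hbar = 1.0
E = np.array([0.0, 1.0])                       # assumed eigenenergies
psi0 = np.array([1.0, 1.0], complex) / np.sqrt(2)
rho0 = np.outer(psi0, psi0.conj())
for t in (0.0, np.pi / 2, np.pi):
    U = np.diag(np.exp(-1j * E * t / hbar))
    rho_t = U @ rho0 @ U.conj().T              # Eq. (4.15)
    print(f"t={t:.2f}")
    print(np.round(rho_t, 3))   # diagonal constant; rho_12 ~ e^{-i w_12 t}, Eq. (4.20)
```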
5.3: The Density Matrix in the Interaction Picture
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/05%3A_The_Density_Matrix/5.03%3A_The_Density_Matrix_in_the_Interaction_Picture | For the case in which we wish to describe a material Hamiltonian \(H_0\) under the influence of an external potential \(V(t)\),\[H (t) = H _ {0} + V (t) \label{4.21}\]we can also formulate the density operator in the interaction picture, \(\rho_I\). From our original definition of the interaction picture wavefunctions\[| \psi _ {I} \rangle = U _ {0}^{\dagger} | \psi _ {S} \rangle \label{4.22}\]we obtain \(\rho_I\) as\[\rho _ {I} = U _ {0}^{\dagger} \rho _ {S} U _ {0} \label{4.23}\]Similar to the discussion of the density operator in the Schrödinger equation, above, the equation of motion in the interaction picture is\[\dfrac {\partial \rho _ {I}} {\partial t} = - \dfrac {i} {\hbar} \left[ V _ {I} (t) , \rho _ {I} (t) \right] \label{4.24}\]where, as before, \(V _ {I} = U _ {0}^{\dagger} V U _ {0}\).Equation \ref{4.24} can be integrated to obtain\[\rho _ {I} (t) = \rho _ {I} \left( t _ {0} \right) - \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} \left[ V _ {I} \left( t^{\prime} \right) , \rho _ {I} \left( t^{\prime} \right) \right] \label{4.25}\]Repeated substitution of \(\rho _ {I} (t)\) into itself in this expression gives a perturbation series expansion\[\begin{align} \rho _ {I} (t) &= \rho _ {0} - \dfrac {i} {\hbar} \int _ {t _ {0}}^{t} d t _ {1} \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \\[4pt] & + \left( - \dfrac {i} {\hbar} \right)^{2} \int _ {t _ {0}}^{t} d t _ {2} \int _ {t _ {0}}^{t _ {2}} d t _ {1} \left[ V _ {I} \left( t _ {2} \right) , \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \right] + \cdots \\[4pt] & + \left( - \dfrac {i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d t _ {n} \int _ {t _ {0}}^{t _ {n}} d t _ {n - 1} \cdots \int _ {t _ {0}}^{t _ {2}} d t _ {1} \left[ V _ {I} \left( t _ {n} \right) , \left[ V _ {I} \left( t _ {n - 1} \right) , \left[ \cdots \left[ V _ {I} \left( t _ {1} \right) , \rho _ {0} \right] \cdots \right] \right] \right] \\[4pt] & + \cdots \label{4.26}\\[4pt] &= \rho^{( 0 )} + \rho^{( 1 )} + \rho^{( 2 )} + \cdots + \rho^{( n )} + \cdots \label{4.27} \end{align}\]Here \(\rho _ {0} = \rho \left( t _ {0} \right)\) and \(\rho^{( n )}\) is the nth-order expansion of the density matrix. This perturbative expansion will play an important role later in the description of nonlinear spectroscopy. An nth order expansion term will be proportional to the observed polarization in an nth order nonlinear spectroscopy, and the nested commutators in Equation \ref{4.26} are proportional to nonlinear response functions. Similar to Equation \ref{4.15}, Equation \ref{4.26} can also be expressed as\[\rho _ {I} (t) = U _ {0} \rho _ {I} ( 0 ) U _ {0}^{\dagger} \label{4.28}\]This is the solution to the Liouville equation in the interaction picture. In describing the time-evolution of the density matrix, particularly when describing relaxation processes later, it is useful to use a superoperator notation to simplify the expressions above. 
The Liouville equation can be written in shorthand in terms of the Liouvillian superoperator \(\hat {\hat {\mathcal {L}}}\)\[\dfrac {\partial \hat {\rho} _ {I}} {\partial t} = \dfrac {- i} {\hbar} \hat {\hat {\mathcal {L}}} \hat {\rho} _ {I} \label{4.29}\]where \(\hat {\hat {\mathcal {L}}}\) is defined in the Schrödinger picture as\[\hat {\hat {\mathcal {L}}} \hat {A} \equiv [ H , \hat {A} ] \label{4.30}\]Similarly, the time propagation described by Equation \ref{4.28} can also be written in terms of a superoperator \(\hat {\hat {G}}\), the time-propagator, as\[\rho _ {I} (t) = \hat {\hat {G}} (t) \rho _ {I} ( 0 ) \label{4.31}\]\(\hat {\hat {G}}\) is defined in the interaction picture as\[\hat {\hat {G}} \hat {A} _ {I} \equiv U _ {0} \hat {A} _ {I} U _ {0}^{\dagger} \label{4.32}\]Given the eigenstates of \(H_0\), the propagation for a particular density matrix element is\[ \begin{align} \hat {\hat {G}} (t) \rho _ {a b} & = e^{- i H _ {0} t / \hbar} | a \rangle \langle b | e^{i H _ {0} t / \hbar} \\[4pt] &= e^{- i \omega _ {a b} t} | a \rangle \langle b | \end{align} \label{4.33}\]Using the Liouville space time-propagator, the evolution of the density matrix to arbitrary order in Equation \ref{4.26} can be written as\[\rho _ {I}^{( n )} = \left( - \dfrac {i} {\hbar} \right)^{n} \int _ {t _ {0}}^{t} d t _ {n} \int _ {t _ {0}}^{t _ {n}} d t _ {n - 1} \ldots \int _ {t _ {0}}^{t _ {2}} d t _ {1} \hat {\hat {G}} \left( t - t _ {n} \right) V \left( t _ {n} \right) \hat {\hat {G}} \left( t _ {n} - t _ {n - 1} \right) V \left( t _ {n - 1} \right) \cdots \hat {\hat {G}} \left( t _ {2} - t _ {1} \right) V \left( t _ {1} \right) \rho _ {0} \label{4.34}\]This page titled 5.3: The Density Matrix in the Interaction Picture is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,312 |
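In a finite basis, the superoperator in Equation \ref{4.30} can be represented as an ordinary matrix acting on the "vectorized" density matrix. A minimal sketch follows; it relies on the standard row-major vectorization identity vec(AXB) = (A ⊗ Bᵀ)vec(X), and the random Hamiltonian is an assumption used only to test the construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)); H = H + H.conj().T
rho = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)); rho = rho + rho.conj().T
I = np.eye(n)
# Row-major vectorization: vec(A X B) = (A kron B^T) vec(X), so
# [H, rho] = H rho - rho H  <->  (H kron I - I kron H^T) vec(rho)
L = np.kron(H, I) - np.kron(I, H.T)
lhs = (L @ rho.flatten()).reshape(n, n)
rhs = H @ rho - rho @ H
print(np.allclose(lhs, rhs))   # True: L is the matrix form of the Liouvillian
```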
6.1: Born–Oppenheimer Approximation
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/06%3A_Adiabatic_Approximation/6.01%3A_BornOppenheimer_Approximation | As a starting point, it is helpful to review the Born–Oppenheimer Approximation (BOA). For a molecular system, the Hamiltonian can be written in terms of the kinetic energy of the nuclei (\(N\)) and electrons (\(e\)) and the potential energy for the Coulomb interactions of these particles.\[\begin{align} \hat {H} &= \hat {T} _ {e} + \hat {T} _ {N} + \hat {V} _ {e e} + \hat {V} _ {N N} + \hat {V} _ {e N} \\[4pt] &= - \sum _ {i = 1}^{n} \frac {\hbar^{2}} {2 m _ {e}} \nabla _ {i}^{2} - \sum _ {J = 1}^{N} \frac {\hbar^{2}} {2 M _ {J}} \nabla _ {J}^{2} + \sum _ {i > j}^{n} \frac {e^{2}} {\left| \mathbf {r} _ {i} - \mathbf {r} _ {j} \right|} + \sum _ {I > J}^{N} \frac {Z _ {I} Z _ {J} e^{2}} {\left| \mathbf {R} _ {I} - \mathbf {R} _ {J} \right|} - \sum _ {i , J} \frac {Z _ {J} e^{2}} {\left| \mathbf {r} _ {i} - \mathbf {R} _ {J} \right|} \label{5.1} \end{align}\]Here and in the following, we will use lowercase variables to refer to electrons and uppercase to nuclei. The variables \(n\), \(i\), \(\mathbf {r}\), \(\nabla _ {r}^{2}\), and \(m_e\) refer to the number, index, position, Laplacian, and mass of electrons, respectively, and \(N\), \(J\), \(\mathbf {R}\), and \(M\) refer to the nuclei. \(e\) is the electron charge, and \(Z\) is the atomic number of the nucleus. Note that this Hamiltonian does not include relativistic effects such as spin-orbit coupling.The time-independent Schrödinger equation is\[\hat {H} ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) \Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) = E \Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} ) \label{5.2}\]\(\Psi ( \hat {\mathbf {r}} , \hat {\mathbf {R}} )\) is the total vibronic wavefunction, where “vibronic” refers to the combined electronic and nuclear eigenstates. Exact solutions using the molecular Hamiltonian are intractable for most problems of interest, so we turn to simplifying approximations. The BO approximation is motivated by noting that the nuclei are far more massive than an electron (\(m_e \ll M_I\)). With this criterion, and when the distances separating particles are not unusually small, the kinetic energy of the nuclei is small relative to the other terms in the Hamiltonian. Physically, this means that the electrons move and adapt rapidly—adiabatically—in response to shifting nuclear positions. This offers an avenue to solving for \(\Psi\) by fixing the position of the nuclei, solving for the electronic wavefunctions \(\psi_i\), and then iterating for varying \(\boldsymbol {R}\) to obtain effective electronic potentials on which the nuclei move.Since it is fixed for the electronic calculation, we proceed by treating \(\mathbf {R}\) as a parameter rather than an operator, set \(\hat{T}_N\) to 0, and solve the electronic TISE:\[\hat {H} _ {e l} ( \hat {\mathbf {r}} , \mathbf {R} ) \psi _ {i} ( \hat {\mathbf {r}} , \mathbf {R} ) = U _ {i} ( \mathbf {R} ) \psi _ {i} ( \hat {\mathbf {r}} , \mathbf {R} ) \label{5.3}\]\(U_i\) are the electronic energy eigenvalues for the fixed nuclei, and the electronic Hamiltonian in the BO approximation is\[\hat {H} _ {e l} = \hat {T} _ {e} + \hat {V} _ {e e} + \hat {V} _ {e N} \label{5.4}\]In Equation \ref{5.3}, \(\psi_i\) is the electronic wavefunction for fixed \(\mathbf {R}\), with \(i = 0\) referring to the electronic ground state. Repeating this calculation for varying \(\mathbf {R}\), we obtain \(U_i(R)\), an effective or mean-field potential for the electronic states on which the nuclei can move. 
These effective potentials are known as Born–Oppenheimer or adiabatic potential energy surfaces (PES).For the nuclear degrees of freedom, we can define a Hamiltonian for the ith electronic PES:\[\hat {H} _ {N u c , i} = \hat {T} _ {N} + U _ {i} ( \hat {R} ) \label{5.5}\]which satisfies a TISE for the nuclear wavefunctions \(\Phi(R)\):\[\hat {H} _ {N u c , i} \Phi _ {i J} ( R ) = E _ {i J} \Phi _ {i J} ( R ) \label{5.6}\]Here \(J\) refers to the Jth eigenstate for nuclei evolving on the ith PES. Equation \ref{5.5} is referred to as the BO Hamiltonian.The BOA effectively separates the nuclear and electronic contributions to the wavefunction, allowing us to express the total wavefunction as a product of these contributions\[\Psi ( \mathbf {r} , \mathbf {R} ) = \Phi ( \mathbf {R} ) \psi ( \mathbf {r} , \mathbf {R} )\] and the eigenvalues as sums of the electronic and nuclear contributions: \(E = E _ {N} + E _ {e}\). The BOA does not treat the nuclei classically. However, it is the basis for semiclassical dynamics methods in which the nuclei evolve classically on a potential energy surface, and interact with quantum electronic states. If we treat the nuclear dynamics classically, then the electronic Hamiltonian can be thought of as depending on \(\mathbf {R}\) or on time as related by velocity or momenta. If the nuclei move infinitely slowly, the electrons will adiabatically follow the nuclei and systems prepared in an electronic eigenstate will remain in that eigenstate for all times.This page titled 6.1: Born–Oppenheimer Approximation is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,313 |
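As a numerical aside, the nuclear TISE of Equation \ref{5.6} is easy to solve on a grid once a PES is in hand. The sketch below assumes a harmonic \(U(R)\) and a proton-like mass in atomic units, purely for illustration, and uses a three-point finite-difference kinetic energy.

```python
import numpy as np

# Solve the nuclear TISE (Eq. 5.6) on a model adiabatic PES U(R) by finite
# differences. A harmonic U(R) is assumed purely for illustration.
hbar, M = 1.0, 1836.0                  # assumed: proton-like mass, atomic units
R = np.linspace(-2.0, 2.0, 1200)
dR = R[1] - R[0]
k = 0.1
U = 0.5 * k * R**2                     # assumed PES: (1/2) k R^2
# Kinetic energy: -hbar^2/(2M) d^2/dR^2 with a three-point stencil
main = hbar**2 / (M * dR**2) + U
off = -hbar**2 / (2 * M * dR**2) * np.ones(len(R) - 1)
H_nuc = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E_iJ = np.linalg.eigvalsh(H_nuc)[:4]
print(E_iJ)                            # compare with hbar*omega*(J + 1/2)
print(np.sqrt(k / M) * (np.arange(4) + 0.5))
```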
6.2: Nonadiabatic Effects
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/06%3A_Adiabatic_Approximation/6.02%3A_Nonadiabatic_Effects | Even without the BO approximation, we note that the nuclear-electronic product states form a complete basis in which to express the total vibronic wavefunction:\[\Psi ( \mathbf {r} , \mathbf {R} ) = \sum _ {i , J} c _ {i , J} \Phi _ {i , J} ( \mathbf {R} ) \psi _ {i} ( \mathbf {r} , \mathbf {R} ) \label{5.7}\]We can therefore use this form to investigate the consequences of the BO approximation. For a given vibronic state, the action of the Hamiltonian on the wavefunction in the TISE is\[\hat {H} \Psi _ {i , J} = \left( \hat {T} _ {N} ( \mathbf {R} ) + \hat {H} _ {e l} ( \mathbf {R} ) \right) \Phi _ {i , J} ( \mathbf {R} ) \psi _ {i} ( \mathbf {R} ) \label{5.8}\]Expanding the Laplacian in the nuclear kinetic energy via the chain rule as\[\nabla^{2} A B = \left( \nabla^{2} A \right) B + 2 ( \nabla A ) \nabla B + A \nabla^{2} B, \nonumber\]we obtain an expression with three terms\[\begin{align} \hat {H} \Psi _ {i , J} & = \psi _ {i} ( \mathbf {R} ) \left( \hat {T} _ {N} ( \mathbf {R} ) + U _ {i} ( \mathbf {R} ) \right) \Phi _ {i , J} ( \mathbf {R} ) \nonumber \\[4pt] & - \sum _ {J} \frac {\hbar^{2}} {M _ {J}} \nabla _ {R} \Phi _ {i , J} ( \mathbf {R} ) \nabla _ {R} \psi _ {i} ( \mathbf {R} ) \nonumber \\[4pt] & - \sum _ {J} \frac {\hbar^{2}} {2 M _ {J}} \Phi _ {i , J} ( \mathbf {R} ) \nabla _ {R}^{2} \psi _ {i} ( \mathbf {R} ) \label{5.9} \end{align}\]This expression is exact for vibronic problems, and is referred to as the coupled channel Hamiltonian. Note that if we set the last two terms in Equation \ref{5.9} to zero, we are left with \[\hat {H} = \hat {T} _ {N} + U\] which is just the Hamiltonian we used in the Born-Oppenheimer approximation, Equation 6.1.7. Therefore, the last two terms describe deviations from the BO approximation, and are referred to as nonadiabatic terms. These depend on the spatial gradient of the wavefunction in the region of interest, and act to couple adiabatic Born–Oppenheimer states. The coupled channel Hamiltonian has a form that is reminiscent of a perturbation theory Hamiltonian in which the Born–Oppenheimer states play the role of the zero-order Hamiltonian being perturbed by a nonadiabatic coupling\[\hat {H} = \hat {H} _ {B O} + \hat {V} \label{5.10}\]To investigate this relationship further, it is helpful to write this Hamiltonian in its matrix form. 
We obtain the matrix elements by sandwiching the Hamiltonian between two projection operators and evaluating\[\hat {H} _ {i , I , j , J} = \iint d \mathbf {r} \,d \mathbf {R} \Psi _ {i , I}^{*} ( \mathbf {r} , \mathbf {R} ) \hat {H} ( \mathbf {r} , \mathbf {R} ) \Psi _ {j , J} ( \mathbf {r} , \mathbf {R} ) \label{5.11}\]Making use of Equation \ref{5.9} we find that the Hamiltonian can be expressed in three terms\[\begin{align} \hat {H} _ {i , I , j , J} & = \int d \mathbf {R} \Phi _ {i , I} ( \mathbf {R} ) \left( \hat {T} _ {N} ( \mathbf {R} ) + U _ {j} ( \mathbf {R} ) \right) \Phi _ {j , J} ( \mathbf {R} ) \delta _ {i , j} \\ & - \sum _ {I} \frac {\hbar^{2}} {M _ {I}} \int d \mathbf {R} \Phi _ {i , I} ( \mathbf {R} ) \nabla _ {R} \Phi _ {j , J} ( \mathbf {R} ) \cdot \mathbf {F} _ {i j} ( \mathbf {R} ) \\ & - \sum _ {I} \frac {\hbar^{2}} {2 M _ {I}} \int d \mathbf {R} \Phi _ {i , I} ( \mathbf {R} ) \Phi _ {j , J} ( \mathbf {R} ) \mathbf {G} _ {i j} ( \mathbf {R} ) \label{5.12} \end{align} \]where\[\begin{align} \mathbf {F} _ {i j} ( \mathbf {R} ) & = \int d \mathbf {r} \psi _ {i}^{*} ( \mathbf {r} , \mathbf {R} ) \nabla _ {R} \psi _ {j} ( \mathbf {r} , \mathbf {R} ) \\ \mathbf {G} _ {i j} ( \mathbf {R} ) & = \int d \mathbf {r} \psi _ {i}^{*} ( \mathbf {r} , \mathbf {R} ) \nabla _ {R}^{2} \psi _ {j} ( \mathbf {r} , \mathbf {R} ) \label{5.13} \end{align}\]The first term in Equation \ref{5.12} gives the BO Hamiltonian. In the latter two terms, \(\mathbf {F}\) is referred to as the nonadiabatic, first-order, or derivative coupling, and \(\mathbf {G}\) is the second-order nonadiabatic coupling or diagonal BO correction. Although they are evaluated by integrating over electronic degrees of freedom, both depend parametrically on the position of the nuclei. In most circumstances the last term is much smaller than the other two, so that we can concentrate on the second term in evaluating couplings between adiabatic states. For our purposes, we can write the nonadiabatic coupling in Equation \ref{5.10} as\[\hat {V} _ {i , I , j , J} ( \mathbf {R} ) = - \sum _ {I} \frac {\hbar^{2}} {M _ {I}} \int d \mathbf {R} \Phi _ {i , I} ( \mathbf {R} ) \nabla _ {R} \Phi _ {j , J} ( \mathbf {R} ) \cdot \mathbf {F} _ {i j} ( \mathbf {R} ) \label{5.14}\]This emphasizes that the coupling between surfaces depends parametrically on the nuclear positions, the gradient of the electronic and nuclear wavefunctions, and the spatial overlap of those wavefunctions.This page titled 6.2: Nonadiabatic Effects is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,314 |
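The derivative coupling \(\mathbf{F}_{ij}\) can be evaluated numerically for a model system. The sketch below assumes a two-state linear-crossing diabatic Hamiltonian with constant coupling (this model, and the analytic comparison formula quoted for it, are assumptions made for illustration) and computes \(F_{12}=\langle\psi_1|\partial\psi_2/\partial R\rangle\) by finite differences of the adiabatic eigenvectors.

```python
import numpy as np

# First-order nonadiabatic coupling F_12 = <psi_1 | d psi_2 / dR> for a
# two-state diabatic model (assumed): H(R) = [[a*R, V], [V, -a*R]]
a, V = 1.0, 0.1
R = np.linspace(-1.0, 1.0, 2001)
dR = R[1] - R[0]

def adiabatic_states(r):
    w, U = np.linalg.eigh(np.array([[a * r, V], [V, -a * r]]))
    return U                      # columns = adiabatic eigenvectors

F12 = []
U_prev = adiabatic_states(R[0])
for r in R[1:]:
    U_now = adiabatic_states(r)
    # fix the arbitrary sign of each eigenvector for a smooth gauge
    for j in range(2):
        if np.dot(U_prev[:, j], U_now[:, j]) < 0:
            U_now[:, j] *= -1
    F12.append(np.dot(U_prev[:, 0], (U_now[:, 1] - U_prev[:, 1]) / dR))
    U_prev = U_now
print("numerical peak |F_12|:", np.max(np.abs(F12)))
# For this model, |F_12| = (a*V/2)/((a*R)^2 + V^2), which peaks at a/(2V)
print("analytic peak:", a / (2 * V))
```

Note how the coupling is sharply peaked at the crossing region and grows as the diabatic coupling \(V\) shrinks, foreshadowing the breakdown of the adiabatic picture discussed in the following sections.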
6.3: Diabatic and Adiabatic States
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/06%3A_Adiabatic_Approximation/6.03%3A_Diabatic_and_Adiabatic_States | Although the Born–Oppenheimer surfaces are the most straightforward and commonly calculated, they may not be the most chemically meaningful states. As an example consider the potential energy curves for the diatomic \(\ce{NaCl}\). The chemically distinct potential energy surfaces one is likely to discuss have distinct atomic or ionic character at large separation between the atoms. These “diabatic” curves focus on physical effects, but are not eigenstates. In the figure, the ionic state \(| a \rangle\) is influenced by the Coulomb attraction between ions that draws them together, leading to a stable configuration at \(R_{eq}\) once these attractive terms are balanced by nuclear repulsive forces. However, the neutral atoms (\(\ce{Na^{0}}\) and \(\ce{Cl^{0}}\)) have a potential energy surface \(| b \rangle\) which is dominated by repulsive interactions. The adiabatic potentials from the BO Hamiltonian will reflect significant coupling between the diabatic electronic states. BO states of the same symmetry will exhibit an avoided crossing where the electronic energies of the corresponding diabatic states are equal. As expected from our earlier discussion, the splitting at the crossing for this one-dimensional system would be \(2V_{ab}\), twice the coupling between diabatic states.The adiabatic potential energy surfaces are important in interpreting the reaction dynamics, as can be illustrated with the reaction between \(\ce{Na}\) and \(\ce{Cl}\) atoms. If the neutral atoms are prepared on the ground state at large separation and slowly brought together, the atoms are weakly repelled until the separation reaches the transition state \(R^‡\). Here we cross into the regime where the ionic configuration has lower energy. As a result of the nonadiabatic couplings, we expect that an electron will transfer from \(\ce{Na^{0}}\) to \(\ce{Cl^{0}}\), and the ions will then feel an attractive force leading to an ionic bond with separation \(R_{eq}\).Diabatic states can be defined in an endless number of ways, but the adiabatic representation is unique. In that respect, the term “nonadiabatic” is also used to refer to all possible diabatic surfaces. However, diabatic states are generally chosen so that the nonadiabatic electronic couplings in Equations 6.2.14 and 6.2.15 are zero. This can be accomplished by making the electronic wavefunction independent of \(R\).As seen above, for coupled states with the same symmetry the couplings repel the adiabatic states and we get an avoided crossing. However, it is still possible for two adiabatic states to cross. Mathematically this requires that the energies of the adiabatic states be degenerate (\(E _ {\alpha} = E _ {\beta}\)) and that the coupling at that configuration be zero (\(V _ {\alpha \beta} = V _ {\beta \alpha} = 0\)). This isn’t possible for a one-dimensional problem, such as the \(\ce{NaCl}\) example above, unless symmetry dictates that the nonadiabatic coupling vanishes. To accomplish this for a Hermitian coupling operator you need two independent nuclear coordinates, which enable you to independently tune the adiabatic splitting and coupling. 
This leads to a single point in the two-dimensional space at which degeneracy exists, which is known as a conical intersection (an important topic that is not discussed further here).This page titled 6.3: Diabatic and Adiabatic States is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,315 |
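The avoided crossing described above is simple to reproduce numerically: diagonalize a 2×2 diabatic Hamiltonian along the nuclear coordinate and confirm that the minimum adiabatic gap is \(2V_{ab}\). The linear diabatic curves and the coupling value below are assumptions made purely for illustration.

```python
import numpy as np

# Adiabatic curves from a two-state diabatic model: an avoided crossing with
# splitting 2*V_ab at the diabatic crossing point.
V_ab = 0.05
R = np.linspace(-1.0, 1.0, 401)
E_a = -R                  # diabatic "ionic" curve (illustrative)
E_b = +R                  # diabatic "neutral" curve (illustrative)
E_minus, E_plus = [], []
for ea, eb in zip(E_a, E_b):
    w = np.linalg.eigvalsh(np.array([[ea, V_ab], [V_ab, eb]]))
    E_minus.append(w[0]); E_plus.append(w[1])
gap = np.min(np.array(E_plus) - np.array(E_minus))
print("minimum adiabatic splitting:", gap, " 2*V_ab =", 2 * V_ab)
```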
6.4: Adiabatic and Nonadiabatic Dynamics
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/06%3A_Adiabatic_Approximation/6.04%3A_Adiabatic_and_Nonadiabatic_Dynamics | The BO approximation never explicitly addresses electronic or nuclear dynamics, but neglecting the nuclear kinetic energy to obtain potential energy surfaces has implicit dynamical consequences. As we discussed for our \(\ce{NaCl}\) example, moving the neutral atoms together slowly allows electrons to completely equilibrate about each forward step, resulting in propagation on the adiabatic ground state. This is the essence of the adiabatic approximation. If you prepare the system in \(\Psi _ {\alpha}\), an eigenstate of \(H\) at the initial time \(t_0\), and propagate slowly enough, then \(\Psi _ {\alpha}\) will evolve as an eigenstate for all times:\[H (t) \Psi _ {\alpha} (t) = E _ {\alpha} (t) \Psi _ {\alpha} (t) \label{5.15}\]Equivalently this means that the nth eigenfunction of \(H(t_0)\) will also be the nth eigenfunction of \(H (t)\). In this limit, there are no transitions between BO surfaces, and the dynamics only reflect the phases acquired from the evolving system. That is, the time propagator can be expressed as\[U \left( t , t _ {0} \right) _ {a d i a b a t i c} = \sum _ {\alpha} | \alpha \rangle \langle \alpha | \exp \left( - \frac {i} {\hbar} \int _ {t _ {0}}^{t} d t^{\prime} E _ {\alpha} \left( t^{\prime} \right) \right) \label{5.16}\]In the opposite limit, we also know that if the atoms were incident on each other so fast (with such high kinetic energy) that the electron did not have time to transfer at the crossing, then the system would pass smoothly through the crossing along the diabatic surface. In fact it is expected that the atoms would collide and recoil. This implies that there is an intermediate regime in which the velocity of the system is such that the system will split and follow both surfaces to some degree.In a more general sense, we would like to understand the criteria for adiabaticity that enable a time-scale separation between the fast and slow degrees of freedom. Speaking qualitatively about any time-dependent interaction between quantum mechanical states, the time-scale that separates the fast and slow propagation regimes is determined by the strength of coupling between those states. We know that two coupled states exchange amplitude at a rate dictated by the Rabi frequency \(\Omega _ {R}\), which in turn depends on the energy splitting and coupling of the states. For systems in which there is significant nonperturbative transfer of population between two states \(a\) and \(b\), the time scale over which this can occur is approximately \(\Delta \mathrm {t} \approx 1 / \Omega _ {\mathrm {R}} \approx \hbar / V _ {\mathrm {ab}}\). This is not precise, but it provides a reasonable starting point for discussing “slow” versus “fast”. “Slow” in an adiabatic sense would mean that a time-dependent interaction acts on the system over a period such that \(\Delta t \gg \hbar / V _ {\mathrm {ab}}\). In the case of our \(\ce{NaCl}\) example, we would be concerned with the time scale over which the atoms pass through the crossing region between diabatic states, which is determined by the incident velocity between atoms.Let’s investigate these issues by looking more carefully at the adiabatic approximation. 
Since the adiabatic states (\(\Psi _ {\alpha} (t) \equiv | \alpha \rangle\)) are orthogonal for all times, we can evaluate the time propagator as\[U (t) = \sum _ {\alpha} e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} ( t^{\prime} ) d t^{\prime}} | \alpha \rangle \langle \alpha | \label{5.17}\]and the time-dependent wavefunction is\[\Psi (t) = \sum _ {\alpha} b _ {\alpha} (t) e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} ( t^{\prime} ) d t^{\prime}} | \alpha \rangle \label{5.18}\]Although these are adiabatic states, we recognize that the expansion coefficients can be time dependent in the general case. So, we would like to investigate the factors that govern this time-dependence. To make the notation more compact, let’s define the time-rate of change of the eigenfunction as\[| \dot {\alpha} \rangle = \frac {\partial} {\partial t} | \Psi _ {\alpha} (t) \rangle \label{5.19}\]If we substitute the general solution Equation \ref{5.18} into the TDSE, we get\[ i \hbar \sum _ {\alpha} \left( \dot {b} _ {\alpha} | \alpha \rangle + b _ {\alpha} | \dot {\alpha} \rangle - \frac {i} {\hbar} E _ {\alpha} b _ {\alpha} | \alpha \rangle \right) e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} ( t^{\prime} ) d t^{\prime}} = \sum _ {\alpha} b _ {\alpha} E _ {\alpha} | \alpha \rangle e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} ( t^{\prime} ) d t^{\prime}} \label{5.20}\]Note that the third term on the left-hand side equals the right-hand side, so they cancel. Acting on both sides from the left with \(\langle \beta |\) leads to\[- \dot {b} _ {\beta} e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\beta} ( t^{\prime} ) d t^{\prime}} = \sum _ {\alpha} b _ {\alpha} \langle \beta | \dot {\alpha} \rangle e^{- \frac {i} {\hbar} \int _ {0}^{t} E _ {\alpha} ( t^{\prime} ) d t^{\prime}} \label{5.21}\]We can break up the terms in the summation into one for the target state \(| \beta \rangle\) and one for the remaining states.\[- \dot {b} _ {\beta} = b _ {\beta} \langle \beta | \dot {\beta} \rangle + \sum _ {\alpha \neq \beta} b _ {\alpha} \langle \beta | \dot {\alpha} \rangle \exp \left[ - \frac {i} {\hbar} \int _ {0}^{t} d t^{\prime} E _ {\alpha \beta} \left( t^{\prime} \right) \right] \label{5.22}\]where\[E _ {\alpha \beta} (t) = E _ {\alpha} (t) - E _ {\beta} (t).\]The adiabatic approximation applies when we can neglect the summation in Equation \ref{5.22}, or equivalently when \(\langle \beta | \dot {\alpha} \rangle \ll \langle \beta | \dot {\beta} \rangle\) for all \(| \alpha \rangle\). In that case, the system propagates on the adiabatic state \(| \beta \rangle\) independent of the other states: \(\dot {b} _ {\beta} = - b _ {\beta} \langle \beta | \dot {\beta} \rangle\). The evolution of the coefficients is\[\begin{align} b _ {\beta} (t) & = b _ {\beta} ( 0 ) \exp \left[ - \int _ {0}^{t} \left\langle \beta \left( t^{\prime} \right) | \dot {\beta} \left( t^{\prime} \right) \right\rangle d t^{\prime} \right] \\ & \approx b _ {\beta} ( 0 ) \exp \left[ \frac {i} {\hbar} \int _ {0}^{t} E _ {\beta} \left( t^{\prime} \right) d t^{\prime} \right] \label{5.23} \end{align}\]Here we note that in the adiabatic approximation\[E _ {\beta} (t) = \langle \beta (t) | H (t) | \beta (t) \rangle.\]Equation \ref{5.23} indicates that in the adiabatic approximation the population in the states never changes, only their phase. 
The second term on the right in Equation \ref{5.22} describes the nonadiabatic effects, and the overlap integral\[ \langle \beta | \dot {\alpha} \rangle = \left\langle \Psi _ {\beta} | \frac {\partial \Psi _ {\alpha}} {\partial t} \right\rangle \label{5.24}\]determines the magnitude of this effect. \(\langle \beta | \dot {\alpha} \rangle\) is known as the nonadiabatic coupling (even though it refers to couplings between adiabatic surfaces), or as the geometrical phase. Note the parallels here to the expression for the nonadiabatic coupling in evaluating the validity of the Born–Oppenheimer approximation; however, here the gradient of the wavefunction is evaluated in time rather than in the nuclear position. It would appear that we can make some connections between these two results by linking the gradient variables through the momentum or velocity of the particles involved.So, when can we neglect the nonadiabatic effects? We can obtain an expression for the nonadiabatic coupling by expanding\[\frac {\partial} {\partial t} [ H | \alpha \rangle = E _ {\alpha} | \alpha \rangle ] \label{5.25}\]and acting from the left with \(\langle \beta |\), which for \(\alpha \neq \beta\) leads to\[ \langle \beta | \dot {\alpha} \rangle = \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha} - E _ {\beta}} \label{5.26}\]For adiabatic dynamics to hold \(\langle \beta | \dot {\alpha} \rangle \ll \langle \beta | \dot {\beta} \rangle\), and so we can say\[\left| \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha} - E _ {\beta}} \right| \ll \frac {\left| E _ {\beta} \right|} {\hbar} \label{5.27}\]So how accurate is the adiabatic approximation for a finite time-period over which the system propagates? We can evaluate Equation \ref{5.22}, assuming that the system is prepared in state \(| \alpha \rangle\) and that the occupation of this state never varies much from one. Then the occupation of any other state can be obtained by integrating over a period\[\left.\begin{aligned} \dot {b} _ {\beta} & = \langle \beta | \dot {\alpha} \rangle \exp \left[ - \frac {i} {\hbar} \int _ {0}^{\tau} d t^{\prime} E _ {\alpha \beta} \left( t^{\prime} \right) \right] \\ b _ {\beta} & \approx i \hbar \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha \beta}^{2}} \left\{\exp \left[ - \frac {i} {\hbar} E _ {\alpha \beta} \tau \right] - 1 \right\} \\ & = 2 \hbar \frac {\langle \beta | \dot {H} | \alpha \rangle} {E _ {\alpha \beta}^{2}} e^{- \frac {i} {2 \hbar} E _ {\alpha \beta} \tau} \sin \left( \frac {E _ {\alpha \beta} \tau} {2 \hbar} \right) \end{aligned} \right. \label{5.28}\]Here I used\[e^{i \theta} - 1 = 2 i e^{i \theta / 2} \sin ( \theta / 2 ).\]For \(| b _ {\beta} | \ll 1\), we expand the \(\sin\) term and find\[\left\langle \Psi _ {\beta} \left| \frac {\partial H} {\partial t} \right| \Psi _ {\alpha} \right\rangle \ll E _ {\alpha \beta} / \tau \label{5.29}\]This is the criterion for adiabatic dynamics, which can be seen to break down near adiabatic curve crossings where \(E _ {\alpha \beta} = 0\), no matter how slowly we propagate through the crossing. 
Even away from a curve crossing, there is always the possibility that the nuclear kinetic energy is large enough that the matrix element of \(\partial H / \partial t\) becomes comparable to \(E _ {\alpha \beta} / \tau\), violating the adiabatic criterion. This page titled 6.4: Adiabatic and Nonadiabatic Dynamics is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,316 |
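As a quick worked example of the criterion in Equation \ref{5.29}, one can form a dimensionless adiabaticity parameter from an assumed splitting, sweep duration, and sweep rate; all numbers below are hypothetical and chosen only to show the comparison.

```python
# Dimensionless check of the adiabaticity criterion, Eq. (5.29):
# adiabatic dynamics requires <beta|dH/dt|alpha> << E_alpha_beta / tau.
# All values below are assumed, purely for illustration (consistent units).
E_ab = 0.01          # adiabatic splitting
tau = 100.0          # duration over which H(t) is swept
dH_dt = 1e-6         # off-diagonal matrix element of dH/dt
xi = dH_dt * tau / E_ab      # must be << 1 for adiabatic dynamics
print("adiabaticity parameter:", xi,
      "-> adiabatic" if xi < 0.1 else "-> nonadiabatic")
```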
6.5: Landau–Zener Transition Probability
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/06%3A_Adiabatic_Approximation/6.05%3A_LandauZener_Transition_Probability | Clearly the adiabatic approximation has significant limitations in the vicinity of curve crossings. This phenomenon is better described through transitions between diabatic surfaces. To begin, how do we link the temporal and spatial variables in the curve crossing picture? We need a time-rate of change of the energy splitting, \(\dot {E} = d E _ {a b} / d t\). The Landau–Zener expression gives the transition probabilities as a result of propagating through the crossing between diabatic surfaces at a constant \(\dot {E} \). If the energy splitting between states varies linearly in time near the crossing point, then setting the crossing point to \(t = 0\) we write\[E _ {a} - E _ {b} = \dot {E} t \label{5.30}\]If the coupling between surfaces \(V_{ab}\) is constant, the transition probability for crossing from surface \(a\) to \(b\) for a trajectory that passes through the crossing is\[P _ {b a} = 1 - \exp \left[ - \frac {2 \pi V _ {a b}^{2}} {\hbar | \dot {E} |} \right] \label{5.31}\]and \(P _ {a a} = 1 - P _ {b a}\). Note if \(V_{ab} =0\) then \(P_{ba} =0\), but if the splitting sweep rate \(\dot {E} \) is small as determined by\[2 \pi V _ {a b}^{2} \gg \hbar | \dot {E} |\label{5.32}\]then we obtain the result expected for the adiabatic dynamics \(P _ {b a} \approx 1\).We can provide a classical interpretation to Equation \ref{5.31} by equating \(\dot {E} \) with the velocity of particles involved in the crossing. We define the velocity as\[v = \dfrac{\partial R}{\partial t}\]and the slope of the diabatic surfaces at the crossing,\[F _ {i} = \partial E _ {i} / \partial R.\]Recognizing that\[\dot {E} = v \left( F _ {a} - F _ {b} \right) \label{5.33}\]we find\[P _ {b a} = 1 - \exp \left[ - \frac {2 \pi V _ {a b}^{2}} {\hbar v \left| F _ {a} - F _ {b} \right|} \right] \label{5.34}\]In the context of potential energy surfaces, what this approximation says is that you need to know the slopes of the potentials at their crossing point, the coupling and their relative velocity in order to extract the rates of chemical reactions.This page titled 6.5: Landau–Zener Transition Probability is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,317 |
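The Landau–Zener result in Equation \ref{5.31} can be checked by brute-force integration of the time-dependent Schrödinger equation for a linearly swept two-level Hamiltonian. The following is a minimal sketch, not a definitive implementation; the coupling, sweep rate, and time window are assumed values.

```python
import numpy as np

# Sweep a two-level crossing, E_a - E_b = Edot * t, at constant coupling V,
# and compare the numerically exact transfer with Eq. (5.31).
hbar, V, Edot = 1.0, 0.15, 0.5             # assumed parameters
T, dt = 100.0, 0.002
c = np.array([1.0, 0.0], complex)          # start in diabatic state |a>
for t in np.arange(-T, T, dt):
    H = np.array([[0.5 * Edot * t, V], [V, -0.5 * Edot * t]])
    w, U = np.linalg.eigh(H)               # short-time propagator via eigh
    c = U @ (np.exp(-1j * w * dt / hbar) * (U.conj().T @ c))
P_num = abs(c[1])**2                       # population transferred a -> b
P_LZ = 1 - np.exp(-2 * np.pi * V**2 / (hbar * abs(Edot)))
print(P_num, P_LZ)      # agree up to a small residual (Stueckelberg) oscillation
```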
7.1: Introduction to Light-Matter Interactions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/07%3A_Interaction_of_Light_and_Matter/7.01%3A_Introduction_to_Light-Matter_Interactions | The term “spectroscopy” comes from the Latin “spectrum” for image or apparition and the Greek "σκοπιεν" for to see. These roots are telling because in molecular spectroscopy you use light to interrogate matter, but you actually never see the molecules, only their influence on the light. Different types of spectroscopy give you different perspectives. This indirect contact with the microscopic targets means that the interpretation of spectroscopy requires a model, whether it is stated or not. Modeling and laboratory practice of spectroscopy are dependent on one another, and spectroscopy is only as useful as its ability to distinguish different models. This makes an accurate theoretical description of the underlying physical process governing the interaction of light and matter important.Quantum mechanically, we will treat spectroscopy as a perturbation induced by the light which acts to couple quantum states of the charged particles in the matter, as we have discussed earlier. Our starting point is to write a Hamiltonian for the light–matter interaction, which in the most general sense would be of the form\[H = H _ {M} + H _ {L} + H _ {L M} \label{6.1}\]Although the Hamiltonian for the matter may be time-dependent, we will treat the Hamiltonian for the matter \(H_M\) as time-independent, whereas the electromagnetic field \(H_L\) and its interaction with the matter \(H_{LM}\) are time-dependent. A quantum mechanical treatment of the light would describe the light in terms of photons for different modes of electromagnetic radiation, which we will describe later. We begin with a semiclassical treatment of the problem, which describes the matter quantum mechanically and the light field classically. We assume that a light field described by a time-dependent vector potential acts on the matter, but the matter does not influence the light. (Strictly, energy conservation requires that any change in energy of the matter be matched with an equal and opposite change in the light field.) For the moment, we are just interested in the effect that the light has on the matter. In that case, we can really ignore \(H_L\), and we have a Hamiltonian for the system that is\[\left.\begin{aligned} H & \approx H _ {M} + H _ {L M} (t) \\[4pt] & = H _ {0} + V (t) \end{aligned} \right. \label{6.2}\]which we can solve in the interaction picture. We will later derive an explicit expression for the Hamiltonian \(H_{LM}\) in the electric dipole approximation. Here, we will derive a Hamiltonian for the light–matter interaction, starting with the force experienced by a charged particle in an electromagnetic field, developing a classical Hamiltonian for this interaction, and then substituting quantum operators for the matter:\[\left. \begin{array} {l} {p \rightarrow - i \hbar \hat {\nabla}} \\ {x \rightarrow \hat {x}} \end{array} \right. \label{6.3}\]In order to get the classical Hamiltonian, we need to work through two steps: (1) describe the classical electromagnetic fields, reducing them to a vector potential, and (2) obtain the classical Hamiltonian for a charged particle interacting with that field.This page titled 7.1: Introduction to Light-Matter Interactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,318 |
7.2: Classical Light–Matter Interactions
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/07%3A_Interaction_of_Light_and_Matter/7.02%3A_Classical_LightMatter_Interactions | As a starting point, it is helpful to first summarize the classical description of electromagnetic fields. A derivation of the plane wave solutions to the electric and magnetic fields and vector potential is described in the appendix in Section 6.6.Maxwell’s equations describe electric (\(\overline {E}\)) and magnetic fields (\(\overline {B}\)); however, to construct a Hamiltonian, we must use the time-dependent interaction potential (rather than a field). To construct the potential representation of \(\overline {E}\) and \(\overline {B}\), you need a vector potential \(\overline {A} ( \overline {r} , t )\), and a scalar potential \(\varphi ( \overline {r} , t )\). For electrostatics we normally think of the field being related to the electrostatic potential through \(\overline {E} = - \nabla \varphi\), but for a field that varies in time and in space, the electrodynamic potential must be expressed in terms of both \(\overline {A}\) and \(\varphi\).In general, an electromagnetic wave written in terms of the electric and magnetic fields requires six variables (the \(x\), \(y\), and \(z\) components of \(E\) and \(B\)). This is an overdetermined problem; Maxwell’s equations constrain these. The potential representation has four variables (\(A _ {x}\), \(A _ {y}\), \(A _ {z}\), and \(\varphi\)), but these are still not uniquely determined. We choose a constraint—a representation or gauge—that allows us to uniquely describe the wave. Choosing a gauge such that \(\varphi=0\) (i.e., the Coulomb gauge) leads to a unique description of \(\overline {E}\) and \(\overline {B}\):\[- \overline {\nabla}^{2} \overline {A} ( \overline {r} , t ) + \frac {1} {c^{2}} \frac {\partial^{2} \overline {A} ( \overline {r} , t )} {\partial t^{2}} = 0 \label{6.4}\]and\[\overline {\nabla} \cdot \overline {A} = 0 \label{6.5}\]This wave equation for the vector potential gives a plane wave solution for charge-free space and suitable boundary conditions:\[\overline {A} ( \overline {r} , t ) = A _ {0} \hat {\varepsilon} e^{i ( \overline {k} \cdot \overline {r} - \omega t )} + A _ {0}^{*} \hat {\varepsilon} e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \label{6.6}\]This describes the wave oscillating in time at an angular frequency \(\omega\) and propagating in space in the direction along the wave vector \(\overline {k}\), with a spatial period \(\lambda = 2 \pi / | \overline {k} |\). Writing the relationship between \(k\), \(\omega\), and \(\lambda\) in a medium with index of refraction \(n\) in terms of their values in free space:\[k = n k _ {0} = \frac {n \omega _ {0}} {c} = \frac {2 \pi n} {\lambda _ {0}} \label{6.7}\]The wave has an amplitude \(A_0\), which is directed along the polarization unit vector \(\hat {\varepsilon}\). Since \(\overline {\nabla} \cdot \overline {A} = 0\), we see that \(\overline {k} \cdot \hat {\varepsilon} = 0\) or \(\overline {k} \perp \hat {\varepsilon}\). 
From the vector potential we can obtain \(\overline {E}\) and \(\overline {B}\)\[\begin{align} \overline {E} & = - \frac {\partial \overline {A}} {\partial t} \\[4pt] & = i \omega A _ {0} \hat {\varepsilon} \left( e^{i ( \overline {k} \cdot \overline {r} - \omega t )} - e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \right) \label{6.8} \end{align}\]\[ \begin{align} \overline {B} & = \overline {\nabla} \times \overline {A} \\[4pt] & = i ( \overline {k} \times \hat {\varepsilon} ) A _ {0} \left( e^{i ( \overline {k} \cdot \overline {r} - \omega t )} - e^{- i ( \overline {k} \cdot \overline {r} - \omega t )} \right) \label{6.9} \end{align}\]If we define a unit vector along the magnetic field polarization as\[\hat {b} = ( \overline {k} \times \hat {\varepsilon} ) / | \overline {k} | = \hat {k} \times \hat {\varepsilon},\]we see that the wave vector, the electric field polarization and magnetic field polarization are mutually orthogonal \(\hat {k} \perp \hat {\varepsilon} \perp \hat {b}\).Also, by comparing Equation \ref{6.6} and \ref{6.8} we see that the vector potential oscillates as \(\cos(\omega t)\), whereas the electric and magnetic fields oscillate as \(\sin(\omega t)\). If we define\[\frac {1} {2} E _ {0} = i \omega A _ {0} \label{6.10}\]\[\frac {1} {2} B _ {0} = i | k | A _ {0} \label{6.11}\]then,\[\overline {E} ( \overline {r} , t ) = \left| E _ {0} \right| \hat {\varepsilon} \sin ( \overline {k} \cdot \overline {r} - \omega t ) \label{6.12}\]\[\overline {B} ( \overline {r} , t ) = \left| B _ {0} \right| \hat {b} \sin ( \overline {k} \cdot \overline {r} - \omega t ) \label{6.13}\]Note that\[E _ {0} / B _ {0} = \omega / | k | = c.\]We will want to express the amplitude of the field in a manner that is experimentally accessible. The intensity \(I\), the energy flux through a unit area, is most easily measured. It is the time-averaged value of the Poynting vector\[I = \langle \overline {S} \rangle = \frac {1} {2} \varepsilon _ {0} c E _ {0}^{2} \quad \left( \mathrm {W} / \mathrm {m}^{2} \right) \label{6.15}\]An alternative representation of the amplitude that is useful for describing quantum light fields is the energy density\[U = \frac {I} {c} = \frac {1} {2} \varepsilon _ {0} E _ {0}^{2} \quad \left( \mathrm {J} / \mathrm {m}^{3} \right) \label{6.16}\]Now, we obtain a classical Hamiltonian that describes charged particles interacting with a radiation field in terms of the vector potential. Start with the Lorentz force on a particle with charge \(q\):\[\overline {F} = q ( \overline {E} + \overline {v} \times \overline {B} ) \label{6.17}\]Here \(v\) is the velocity of the particle. Writing this for one direction (\(x\)) in terms of the Cartesian components of \(\overline {E}\), \(\overline {v}\), and \(\overline {B}\), we have:\[F _ {x} = q \left( E _ {x} + v _ {y} B _ {z} - v _ {z} B _ {y} \right) \label{6.18}\]In Lagrangian mechanics, this force can be expressed in terms of the total potential energy\[F _ {x} = - \frac {\partial U} {\partial x} + \frac {d} {d t} \left( \frac {\partial U} {\partial v _ {x}} \right) \label{6.19}\]Using the relationships that describe \(\overline {E}\) and \(\overline {B}\) in terms of \(\overline {A}\) and \(\varphi \) (Equations \ref{6.8} and \ref{6.9}), inserting into Equation \ref{6.18}, and working it into the form of Equation \ref{6.19}, we can show that\[U = q \varphi - q \overline {v} \cdot \overline {A} \label{6.20}\]This is derived elsewhere [4] and is readily confirmed by replacing it into Equation \ref{6.19}. 
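As a practical aside, Equation \ref{6.15} provides the common laboratory conversion between intensity and field amplitude. A minimal sketch follows; the intensity value is assumed for illustration.

```python
import numpy as np

# Electric field amplitude from intensity, Eq. (6.15): I = (1/2) eps0 c E0^2
eps0 = 8.854e-12        # F/m
c = 2.998e8             # m/s
I = 1e14 * 1e4          # assumed: 1e14 W/cm^2, converted to W/m^2
E0 = np.sqrt(2 * I / (eps0 * c))
print(f"E0 = {E0:.3e} V/m")   # ~2.7e10 V/m, a few percent of the atomic
                              # unit of field strength (5.14e11 V/m)
```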
Now we can write a Lagrangian in terms of the kinetic and potential energy of the particle\[\begin{align} L &= T - U \label{6.21} \\[4pt] &= \frac {1} {2} m \overline {v}^{2} + q \overline {v} \cdot \overline {A} - q \varphi \label{6.22} \end{align}\]The classical Hamiltonian is related to the Lagrangian as\[\begin{align} H & = \overline {p} \cdot \overline {v} - L \\[4pt] & = \overline {p} \cdot \overline {v} - \frac {1} {2} m \overline {v}^{2} - q \overline {v} \cdot \overline {A} + q \varphi \label{6.23} \end{align}\]Recognizing\[\overline {p} = \frac {\partial L} {\partial \overline {v}} = m \overline {v} + q \overline {A} \label{6.24}\]we write\[\overline {v} = \frac {1} {m} ( \overline {p} - q \overline {A} ) \label{6.25}\]Now substituting Equation \ref{6.25} into Equation \ref{6.23}, we have\[ \begin{align} H &= \frac {1} {m} \overline {p} \cdot ( \overline {p} - q \overline {A} ) - \frac {1} {2 m} ( \overline {p} - q \overline {A} )^{2} - \frac {q} {m} ( \overline {p} - q \overline {A} ) \cdot A + q \varphi \label{6.26} \\[4pt] &= \frac {1} {2 m} [ \overline {p} - q \overline {A} ( \overline {r} , t ) ]^{2} + q \varphi ( \overline {r} , t ) \label{6.27} \end{align}\]This is the classical Hamiltonian for a particle in an electromagnetic field. In the Coulomb gauge (\(\varphi = 0\)), the last term is dropped.We can write a Hamiltonian for a single particle in a bound potential \(V_0\) in the absence of an external field as\[H _ {0} = \frac {\overline {p}^{2}} {2 m} + V _ {0} ( \overline {r} ) \label{6.28}\]and in the presence of the EM field,\[H = \frac {1} {2 m} ( \overline {p} - q \overline {A} ( \overline {r} , t ) )^{2} + V _ {0} ( \overline {r} ) \label{6.29}\]Expanding, we obtain\[H = H _ {0} - \frac {q} {2 m} ( \overline {p} \cdot \overline {A} + \overline {A} \cdot \overline {p} ) + \frac {q^{2}} {2 m} | \overline {A} ( \overline {r} , t ) |^{2} \label{6.30}\]Generally, the last term, which goes as the square of \(A\), is small compared to the cross term, which is proportional to the first power of \(A\). This term should be considered for extremely high field strengths, which are non-perturbative and significantly distort the potential binding molecules together, i.e., when it is similar in magnitude to \(V_0\). One can estimate that this would start to play a role at intensity levels \(>10^{15}\, W/cm^2\), which may be observed for very high energy and tightly focused pulsed femtosecond lasers. So, for weak fields we have an expression that maps directly onto solutions we can formulate in the interaction picture:\[H = H _ {0} + V (t) \label{6.31}\]with\[V (t) = - \frac {q} {2 m} ( \overline {p} \cdot \overline {A} + \overline {A} \cdot \overline {p} ) \label{6.32}.\]This page titled 7.2: Classical Light–Matter Interactions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,319 |
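The claim that the \(|A|^2\) term only matters at very high intensity can be made quantitative through the cycle-averaged quiver (ponderomotive) energy, \(U_p = q^2 E_0^2 / 4 m \omega^2\). The sketch below compares \(U_p\) to the photon energy at an assumed 800 nm wavelength for two assumed intensities; it is an order-of-magnitude estimate, not part of the original derivation.

```python
import numpy as np

# Rough estimate of when the |A|^2 term in Eq. (6.30) matters: its cycle-
# averaged magnitude is the ponderomotive energy U_p = q^2 E0^2 / (4 m w^2).
e, m_e = 1.602e-19, 9.109e-31
eps0, c, hbar = 8.854e-12, 2.998e8, 1.055e-34
w = 2 * np.pi * c / 800e-9                    # assumed 800 nm light
for I_Wcm2 in (1e10, 1e15):
    E0 = np.sqrt(2 * I_Wcm2 * 1e4 / (eps0 * c))
    U_p = e**2 * E0**2 / (4 * m_e * w**2)
    print(f"I = {I_Wcm2:.0e} W/cm^2:  U_p / (hbar w) = {U_p / (hbar * w):.2e}")
# Only at the highest intensities does the quadratic term compete with the
# linear light-matter coupling.
```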
7.3: Quantum Mechanical Electric Dipole Hamiltonian
Now we are in a position to substitute the quantum mechanical momentum for the classical momentum:\[\overline {p} = - i \hbar \overline {\nabla} \label{6.33}\]Here the vector potential remains classical and only modulates the interaction strength:\[V (t) = \frac {i \hbar q} {2 m} ( \overline {\nabla} \cdot \overline {A} + \overline {A} \cdot \overline {\nabla} ) \label{6.34}\]We can show that \(\overline {\nabla} \cdot \overline {A} = \overline {A} \cdot \overline {\nabla}\). For instance, if we are operating on a wavefunction on the right, we can use the chain rule to write \(\overline {\nabla} \cdot ( \overline {A} | \psi \rangle ) = ( \overline {\nabla} \cdot \overline {A} ) | \psi \rangle + \overline {A} \cdot ( \overline {\nabla} | \psi \rangle ).\) The first term is zero since we are working in the Coulomb gauge (\(\overline {\nabla} \cdot \overline {A} = 0\)). Now we have\[\begin{align} V (t) & = \frac {i \hbar q} {m} \overline {A} \cdot \overline {\nabla} \\[4pt] & = - \frac {q} {m} \overline {A} \cdot \hat {p} \label{6.35} \end{align} \]We can generalize Equation \ref{6.35} for the case of multiple charged particles, as would be appropriate for interactions involving a molecular Hamiltonian:\[\begin{align} V (t) &= - \sum _ {j} \frac {q _ {j}} {m _ {j}} \overline {A} \left( \overline {r} _ {j} , t \right) \cdot \hat {p} _ {j} \label{6.36} \\[4pt] &= - \sum _ {j} \frac {q _ {j}} {m _ {j}} \left[ A _ {0} \hat {\varepsilon} \cdot \hat {p} _ {j} e^{i \left( \overline {k} \cdot \overline {r} _ {j} - \omega t \right)} + A _ {0}^{*} \hat {\varepsilon} \cdot \hat {p} _ {j} e^{- i \left( \overline {k} \cdot \overline {r} _ {j} - \omega t \right)} \right] \label{6.37} \end{align}\]Under most of the circumstances we will encounter, we can neglect the wave vector dependence of the interaction potential. This applies if the wavelength of the field is much larger than the dimensions of the molecules we are interrogating, i.e., \(\lambda \rightarrow \infty\) and \(| k | \rightarrow 0\). To see this, let's define \(\overline{r}_0\) as the center of mass of a molecule and expand about that position:\[\begin{align} e^{i \overline {k} \cdot \overline {r} _ {i}} & = e^{i \overline {k} \cdot \overline {r} _ {0}} e^{i \overline {k} \cdot \left( \overline {r} _ {i} - \overline {r} _ {0} \right)} \\[4pt] & = e^{i \overline {k} \cdot \overline {r} _ {0}} e^{i \overline {k} \cdot \delta \overline {r} _ {i}} \label{6.38} \end{align}\]For interactions with UV, visible, and infrared radiation, wavelengths are measured in hundreds to thousands of nanometers. This is orders of magnitude larger than the dimensions that describe charge distributions in molecules (\(\delta \overline {r} _ {i} = \overline {r} _ {i} - \overline {r} _ {0}\)). Under those circumstances \(| k | \delta r \ll 1\), and setting \(\overline {r _ {0}} = 0\) means that \(e^{i \overline {k} \cdot \overline {r}} \rightarrow 1\). This is known as the electric dipole approximation. Implicit in this is also the statement that all molecules within a macroscopic volume experience an interaction with a spatially uniform, homogeneous electromagnetic field.

Certainly there are circumstances where the electric dipole approximation is poor.
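To put numbers on the dipole approximation, the dimensionless parameter is \(|k|\,\delta r = 2\pi\,\delta r/\lambda\). A minimal sketch (the molecular sizes and function name are illustrative choices of ours):

```python
import math

def k_dot_dr(wavelength_nm, molecule_size_nm):
    """Dimensionless |k| * dr = 2 pi dr / lambda for a plane wave."""
    return 2.0 * math.pi * molecule_size_nm / wavelength_nm

# Visible light on a ~1 nm molecule: |k| dr ~ 0.013 << 1, so
# exp(i k.r) ~ 1 and the dipole approximation is excellent.
# For 0.1 nm x-rays the parameter is ~63 and the approximation fails.
for lam in (500.0, 0.1):
    print(f"lambda = {lam:6.1f} nm: |k| dr = {k_dot_dr(lam, 1.0):.3g}")
```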
In the case where the wavelength of light is on the same scale as molecular dimensions, the light will have to interact with spatially varying charge distributions, which will lead to scattering of the light and interferences between the light scattered from different spatial regions. We will not concern ourselves with this limit further. We also retain the spatial dependence for certain other types of light–matter interactions. For instance, we can expand Equation \ref{6.38} as\[e^{i \overline {k} \cdot \overline {r_i}} \approx e^{i \overline {k} \cdot \overline {r} _ {0}} \left[ 1 + i \overline {k} \cdot \left( \overline {r} _ {i} - \overline {r} _ {0} \right) + \ldots \right] \label{6.39}\]We retain the second term for electric quadrupole and magnetic dipole transitions: the charge distribution interacting with the gradient of the electric field and with the magnetic field (Section 7.7).

Now, using \(A _ {0} = i E _ {0} / 2 \omega\), we write Equation \ref{6.35} as\[\begin{align} V (t) &= \frac {- i q E _ {0}} {2 m \omega} \left[ \hat {\varepsilon} \cdot \hat {p}\, e^{- i \omega t} - \hat {\varepsilon} \cdot \hat {p}\, e^{i \omega t} \right] \label{6.40} \\[4pt] & = \frac {- q E _ {0}} {m \omega} ( \hat {\varepsilon} \cdot \hat {p} ) \sin \omega t \\[4pt] & = \frac {- q} {m \omega} ( \overline {E} (t) \cdot \hat {p} ) \label{6.41} \end{align}\]or for a collection of charged particles (molecules):\[V (t) = - \left( \sum _ {j} \frac {q _ {j}} {m _ {j}} \left( \hat {\varepsilon} \cdot \hat {p} _ {j} \right) \right) \frac {E _ {0}} {\omega} \sin \omega t \label{6.42}\]This is the interaction Hamiltonian in the electric dipole approximation.

In Equation \ref{6.39}, the second term must be considered in certain cases, where variation in the vector potential over the distance scales of the molecule must be taken into account. This will be the case when one describes interactions with short wavelength radiation, such as x-rays. Then the scattering of radiation by electronic states of molecules and the interference between transmitted and scattered fields are important. The second term is also retained for electric quadrupole transitions and magnetic dipole transitions, as described in the appendix in Section 7.7. Electric quadrupole transitions require a gradient of the electric field across the molecule, and the effect is generally \(\sim 10^{-3}\) of the electric dipole interaction.

We are seeking to use this Hamiltonian to evaluate the transition rates induced by \(V(t)\) from our first-order perturbation theory expression. For a perturbation\[V (t) = V _ {0} \sin \omega t\]the rate of transitions induced by the field is\[w _ {k \ell} = \frac {\pi} {2 \hbar} \left| V _ {k \ell} \right|^{2} \left[ \delta \left( E _ {k} - E _ {\ell} - \hbar \omega \right) + \delta \left( E _ {k} - E _ {\ell} + \hbar \omega \right) \right] \label{6.43}\]which depends on the matrix elements for the Hamiltonian in Equation \ref{6.42}. Note that in first-order perturbation theory, the matrix elements are evaluated using the unperturbed wavefunctions.
Thus, we evaluate the matrix elements of the electric dipole Hamiltonian using the eigenfunctions of \(H_0\):\[V _ {k \ell} = \left\langle k \left| V _ {0} \right| \ell \right\rangle = \frac {- q E _ {0}} {m \omega} \langle k | \hat {\varepsilon} \cdot \hat {p} | \ell \rangle \label{6.44}\]We can evaluate \(\langle k | \hat {p} | \ell \rangle\) using an expression that holds for any one-particle Hamiltonian:\[\left[ \hat {r} , \hat {H} _ {0} \right] = \frac {i \hbar \hat {p}} {m} \label{6.45}\]This expression gives\[\begin{align} \langle k | \hat {p} | \ell \rangle & = \frac {m} {i \hbar} \left\langle k \left| \hat {r} \hat {H} _ {0} - \hat {H} _ {0} \hat {r} \right| \ell \right\rangle \\[4pt] & = \frac {m} {i \hbar} \left( \langle k | \hat {r} | \ell \rangle E _ {\ell} - E _ {k} \langle k | \hat {r} | \ell \rangle \right) \\[4pt] & = i m \omega _ {k \ell} \langle k | \hat {r} | \ell \rangle \label{6.46} \end{align}\]So we have\[V _ {k \ell} = - i q E _ {0} \frac {\omega _ {k \ell}} {\omega} \langle k | \hat {\varepsilon} \cdot \overline {r} | \ell \rangle \label{6.47}\]or for many charged particles\[V _ {k \ell} = - i E _ {0} \frac {\omega _ {k \ell}} {\omega} \left\langle k \left| \hat {\varepsilon} \cdot \sum _ {j} q _ {j} \hat {r} _ {j} \right| \ell \right\rangle \label{6.48}\]The matrix element can be written in terms of the dipole operator, which describes the spatial distribution of charges,\[\hat {\mu} = \sum _ {j} q _ {j} \hat {r} _ {j} \label{6.49}\]We can see that it is the quantum analog of the classical dipole moment, which describes the distribution of charge density \(\rho\) in the molecule:\[\overline {\mu} = \int d \overline {r}\, \overline {r} \rho ( \overline {r} ) \label{6.50}\]The strength of interaction between light and matter is given by the matrix element of the dipole operator,\[\mu _ {f i} \equiv \langle f | \overline {\mu} \cdot \hat {\varepsilon} | i \rangle \label{6.51}\]which is known as the transition dipole moment. In order to have absorption, the part \(\langle f | \mu | i \rangle\), which is a measure of the change of charge distribution between \(| f \rangle\) and \(| i \rangle\), should be non-zero. In other words, the incident radiation has to induce a change in the charge distribution of matter to obtain an effective absorption rate. This matrix element is the basis of selection rules based on the symmetry of the matter charge eigenstates. The second part, the electric field polarization vector, says that the electric field of the incident radiation must project onto the matrix elements of the dipole moment between the final and initial states of the charge distribution.

Then the matrix elements in the electric dipole Hamiltonian are\[V _ {k \ell} = - i E _ {0} \frac {\omega _ {k \ell}} {\omega} \mu _ {k \ell} \label{6.52}\]This expression allows us to write in a simplified form the well-known interaction potential for a dipole in a field:\[V (t) = - \overline {\mu} \cdot \overline {E} (t) \label{6.53}\]Note that we have reversed the order of terms because they commute.
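Equation \ref{6.46} is easy to verify numerically. The sketch below (Python/NumPy, in units \(\hbar = m = \omega = 1\)) builds \(\hat{x}\) and \(\hat{p}\) for a harmonic oscillator from ladder operators in a truncated basis and checks \(\langle k|\hat{p}|\ell\rangle = im\omega_{k\ell}\langle k|\hat{x}|\ell\rangle\) for low-lying states; the oscillator is our choice of test system, not part of the text:

```python
import numpy as np

N = 30
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)      # annihilation operator
x = (a + a.T) / np.sqrt(2.0)          # x = (a + a†)/sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2.0)     # p = i(a† - a)/sqrt(2)
E = n + 0.5                           # harmonic oscillator energies

k, l = 2, 3
w_kl = E[k] - E[l]                    # omega_kl = (E_k - E_l)/hbar
print(p[k, l])                        # -1.2247j
print(1j * w_kl * x[k, l])            # -1.2247j  (Eq. 6.46 holds)
```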
This leads to an expression for the rate of transitions between quantum states induced by the light field:\[\begin{align} w _ {k \ell} & = \frac {\pi} {2 \hbar} \left| E _ {0} \right|^{2} \frac {\omega _ {k \ell}^{2}} {\omega^{2}} \left| \overline {\mu} _ {k \ell} \right|^{2} \left[ \delta \left( E _ {k} - E _ {\ell} - \hbar \omega \right) + \delta \left( E _ {k} - E _ {\ell} + \hbar \omega \right) \right] \\[4pt] & = \frac {\pi} {2 \hbar^{2}} \left| E _ {0} \right|^{2} \left| \overline {\mu} _ {k \ell} \right|^{2} \left[ \delta \left( \omega _ {k \ell} - \omega \right) + \delta \left( \omega _ {k \ell} + \omega \right) \right] \label{6.54} \end{align}\]In essence, Equation \ref{6.54} is an expression for the absorption and emission spectrum, since the rate of transitions can be related to the power absorbed from or added to the light field. More generally, we would express the spectrum in terms of a sum over all possible initial and final states, the eigenstates of \(H_0\):\[w _ {f i} = \sum _ {i , f} \frac {\pi} {\hbar^{2}} \left| E _ {0} \right|^{2} \left| \mu _ {f i} \right|^{2} \left[ \delta \left( \omega _ {f i} - \omega \right) + \delta \left( \omega _ {f i} + \omega \right) \right] \label{6.55}\]This page titled 7.3: Quantum Mechanical Electric Dipole Hamiltonian is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.4: Relaxation and Line-Broadening
Let's describe absorption to a state that is coupled to a continuum. What happens to the probability of absorption if the excited state decays exponentially?

We can start with the first-order expression\[\frac {\partial} {\partial t} b _ {k} = - \frac {i} {\hbar} e^{i \omega _ {k \ell} t} V _ {k \ell} (t) \label{6.56}\]where we make the approximation \(b _ {\ell} (t) \approx 1\). We can add irreversible relaxation to the description of \(b_k\) using our earlier expression for an exponentially relaxing amplitude,\[b _ {k} (t) = \exp \left[ - \overline {w} _ {n k} t / 2 - i \Delta E _ {k} t / \hbar \right].\]Here we will neglect the correction to the energy, setting \(\Delta E _ {k} = 0\), so\[\frac {\partial} {\partial t} b _ {k} = - \frac {i} {\hbar} e^{i \omega _ {k \ell} t} V _ {k \ell} (t) - \frac {\overline {w} _ {n k}} {2} b _ {k} \label{6.57}\]Or, using \(V _ {k \ell} (t) = - i E _ {0} ( \omega _ {k \ell} / \omega ) \overline {\mu} _ {k \ell} \sin \omega t\) (Equation \ref{6.52}),\[\begin{align} \frac {\partial} {\partial t} b _ {k} & = \frac {- i} {\hbar} e^{i \omega _ {k \ell} t} \sin ( \omega t )\, V _ {k \ell} - \frac {\overline {w} _ {n k}} {2} b _ {k} (t) \\[4pt] & = \frac {E _ {0} \omega _ {k \ell}} {2 i \hbar \omega} \left[ e^{i \left( \omega _ {k \ell} + \omega \right) t} - e^{i \left( \omega _ {k \ell} - \omega \right) t} \right] \overline {\mu} _ {k \ell} - \frac {\overline {w} _ {n k}} {2} b _ {k} (t) \label{6.58} \end{align}\]The solution to the differential equation\[\dot {y} + a y = b e^{i \alpha t} \label{6.59}\]is\[y (t) = A e^{- a t} + \frac {b e^{i \alpha t}} {a + i \alpha} \label{6.60}\]which gives\[b _ {k} (t) = A e^{- \overline {w} _ {n k} t / 2} + \frac {E _ {0} \overline {\mu} _ {k \ell}} {2 i \hbar} \left[ \frac {e^{i \left( \omega _ {k \ell} + \omega \right) t}} {\overline {w} _ {n k} / 2 + i \left( \omega _ {k \ell} + \omega \right)} - \frac {e^{i \left( \omega _ {k \ell} - \omega \right) t}} {\overline {w} _ {n k} / 2 + i \left( \omega _ {k \ell} - \omega \right)} \right] \label{6.61}\]Let's look at absorption only, in the long time limit:\[b _ {k} (t) = \frac {E _ {0} \overline {\mu} _ {k \ell}} {2 \hbar} \left[ \frac {e^{i \left( \omega _ {k \ell} - \omega \right) t}} {\omega _ {k \ell} - \omega - i \overline {w} _ {n k} / 2} \right] \label{6.62}\]for which the probability of transition to \(k\) is\[P _ {k} = \left| b _ {k} \right|^{2} = \frac {E _ {0}^{2} \left| \mu _ {k \ell} \right|^{2}} {4 \hbar^{2}} \frac {1} {\left( \omega _ {k \ell} - \omega \right)^{2} + \overline {w} _ {n k}^{2} / 4} \label{6.63}\]The frequency dependence of the transition probability has a Lorentzian form. The FWHM line width gives the relaxation rate \(\overline {w} _ {n k}\) from \(k\) into the continuum \(n\). Note that the line width is related to the dynamics of the system rather than to the manner in which we introduced the perturbation. The line width or line shape is an additional feature that we interpret in our spectra, and it commonly originates from irreversible relaxation or other processes that destroy the coherence first set up by the light field.

This page titled 7.4: Relaxation and Line-Broadening is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
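The Lorentzian line shape of Equation \ref{6.63} has a full width at half maximum of exactly \(\overline{w}_{nk}\), which is easy to confirm numerically. A minimal sketch in arbitrary units:

```python
import numpy as np

w_kl, w_nk = 10.0, 0.5                       # transition freq., decay rate
w = np.linspace(5.0, 15.0, 200001)
P = 1.0 / ((w_kl - w)**2 + w_nk**2 / 4.0)    # line-shape factor of Eq. 6.63

above = w[P >= P.max() / 2.0]                # points above half maximum
print(f"FWHM = {above[-1] - above[0]:.4f}")  # 0.5000, i.e., w_nk
```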
7.5: Absorption Cross-Sections
The rate of absorption induced by a monochromatic electromagnetic field is\[w _ {k \ell} ( \omega ) = \frac {\pi} {2 \hbar^{2}} \left| E _ {0} ( \omega ) \right|^{2} | \langle k | \hat {\varepsilon} \cdot \overline {\mu} | \ell \rangle |^{2} \delta \left( \omega _ {k \ell} - \omega \right) \label{6.64}\]The rate is clearly dependent on the strength of the field. The variable that you can most easily measure is the intensity \(I\), the energy flux through a unit area, which is the time-averaged value of the Poynting vector, \(S\):\[S = \varepsilon _ {0} c^{2} ( \overline {E} \times \overline {B} ) \label{6.65}\]\[I = \langle S \rangle = \frac {1} {2} \varepsilon _ {0} c E _ {0}^{2} \label{6.66}\]Using this we can write\[w _ {k \ell} = \frac {4 \pi} {3 \varepsilon _ {0} c \hbar^{2}} I ( \omega ) | \langle k | \overline {\mu} | \ell \rangle |^{2} \delta \left( \omega _ {k \ell} - \omega \right) \label{6.67}\]where I have also made use of the uniform distribution of polarizations applicable to an isotropic field:\[\left| \overline {E} _ {0} \cdot \hat {x} \right|^{2} = \left| \overline {E} _ {0} \cdot \hat {y} \right|^{2} = \left| \overline {E} _ {0} \cdot \hat {z} \right|^{2} = \frac {1} {3} \left| E _ {0} \right|^{2}.\]Now let's relate the rates of absorption to a quantity that is directly measured, the absorption cross section \( \alpha\):\[ \begin{align} \alpha &= \frac {\text {total energy absorbed per unit time}} {\text {total incident intensity (energy/unit time/area)}} \label{6.68} \\[4pt] &= \frac {\hbar \omega w _ {k \ell}} {I} \end{align}\]Note that \( \alpha\) has units of cm\(^2\). The golden rule rate for absorption also gives the same rate for stimulated emission. Given two levels \(| m \rangle\) and \(| n \rangle\),\[w _ {n m} = w _ {m n}\]\[ \therefore \left( \alpha _ {A} \right) _ {n m} = \left( \alpha _ {S E} \right) _ {m n} \label{6.69}\]We can now use a phenomenological approach to calculate the change in the intensity of incident light, \(I\), due to absorption and stimulated emission as it passes through a sample of length \(L\). Given that we have a thermal distribution of identical non-interacting particles with quantum states such that the level \(| m \rangle\) is higher in energy than \(| n \rangle\):\[\frac {d I} {d x} = - N _ {n} \alpha _ {A} I + N _ {m} \alpha _ {S E} I \label{6.70}\]\[\frac {d I} {I} = - \left( N _ {n} - N _ {m} \right) \alpha\, d x \label{6.71}\]Here \(N_n\) and \(N_m\) are the populations of the lower and upper states, respectively, expressed as population densities (cm\(^{-3}\)). Note that \(I\) and \(\alpha\) are both functions of the frequency of the incident light. If \(N\) is the molecular density,\[N _ {n} = N \left( \frac {e^{- \beta E _ {n}}} {Z} \right) \label{6.72}\]Integrating Equation \ref{6.71} over a path length \(L\), we have\[ \begin{align} T &= \frac {I} {I _ {0}} \\[4pt] &= e^{- \Delta N \alpha L} \label{6.73} \\[4pt] &\approx e^{- N \alpha L} \end{align}\]We see that the transmission of light through the sample decays exponentially as a function of path length. Here\[\Delta N = N _ {n} - N _ {m}\]is the thermal population difference between the states. The second expression in Equation \ref{6.73} comes from the high-frequency approximation applicable to optical spectroscopy, for which \(\Delta N \approx N\).
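As a worked example of Equations \ref{6.71}–\ref{6.73}, the sketch below evaluates the transmission through a dilute sample; the concentration, cross section, and path length are illustrative values, and we use the high-frequency limit \(\Delta N \approx N\):

```python
import math

def transmission(N_cm3, alpha_cm2, L_cm):
    """T = I/I0 = exp(-N alpha L), Eq. 6.73 in the Delta-N ~ N limit."""
    return math.exp(-N_cm3 * alpha_cm2 * L_cm)

# 1 uM chromophore -> N = 1e-6 mol/L * 6.022e23 / 1000 cm^3 = 6.022e14 cm^-3
T = transmission(6.022e14, 1e-16, 1.0)
print(f"T = {T:.3f}, A = {-math.log10(T):.3f}")   # T ~ 0.94, A ~ 0.026
```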
Equation \ref{6.73} can also be written in terms of the familiar Beer–Lambert Law:\[A = - \log \frac {I} {I _ {0}} = \epsilon C L \label{6.74}\]where \(A\) is the absorbance and \(C\) is the sample concentration in mol L\(^{-1}\), which is related to the number density via Avogadro's number \(N_A\),\[ C \left[ \operatorname {mol} \mathrm{L}^{- 1} \right] = \frac {N \left[ \mathrm{cm}^{- 3} \right]} {N _ {A}} \times 1000 \label{6.75}\]In Equation \ref{6.74}, the characteristic molecular quantity that describes the sample's ability to absorb the light is \(\epsilon\), the molar decadic extinction coefficient, given in L mol\(^{-1}\) cm\(^{-1}\). With these units, we see that we can equate \(\epsilon\) with the cross section as\[\epsilon = \frac {N _ {A} \alpha} {2303} \label{6.76}\]In the context of sample absorption characteristics, our use of the variable \(\alpha\) for the cross section should not be confused with its other use as an absorption coefficient with units of cm\(^{-1}\), equal to \(N\alpha\) in Equation \ref{6.73}.

These relationships also allow us to obtain the magnitude of the transition dipole matrix element from absorption spectra by integrating over the absorption line shape:\[\begin{align} \left| \mu _ {i f} \right|^{2} &= \frac {6 \varepsilon _ {0} \hbar^{2}\, 2303\, c} {N _ {A} n} \int \frac {\varepsilon ( v )} {v} d v \label{6.77} \\[4pt] &= \left( 108.86\ \mathrm{L\,mol^{- 1}\,cm^{-1}\,D^{-2}} \right)^{- 1} \int \frac {\varepsilon ( v )} {v} d v \end{align}\]Here the absorption line shape is expressed in molar decadic units, the frequency in wavenumbers, and \(n\) is the refractive index of the medium.

This page titled 7.5: Absorption Cross-Sections is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
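To illustrate Equations \ref{6.76} and \ref{6.77}, the sketch below converts a cross section to a molar extinction coefficient and estimates a transition dipole from a crude rectangular line shape. The numbers are illustrative, and we take the refractive index \(n \approx 1\) in using the 108.86 prefactor:

```python
N_A = 6.02214076e23

def epsilon_from_alpha(alpha_cm2):
    """Molar decadic extinction coefficient (L mol^-1 cm^-1), Eq. 6.76."""
    return N_A * alpha_cm2 / 2303.0

def mu_debye(integral_eps_over_v):
    """|mu| in debye from int eps(v)/v dv (molar decadic units,
    wavenumbers), via Eq. 6.77 with n ~ 1."""
    return (integral_eps_over_v / 108.86) ** 0.5

print(f"eps = {epsilon_from_alpha(1e-16):.3g} L mol^-1 cm^-1")  # ~2.6e4
# Crude line: eps = 1e4 over a 1000 cm^-1 band centered at 20000 cm^-1
print(f"|mu| ~ {mu_debye(1e4 * 1000.0 / 2e4):.2f} D")           # ~2.1 D
```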
7.6: Appendix - Review of Free Electromagnetic Field
Here we review the derivation of the vector potential for the plane wave in free space. We begin with Maxwell's equations (SI):\[\begin{align} \overline {\nabla} \cdot \overline {B} &= 0 \label{6.78} \\[4pt] \overline {\nabla} \cdot \overline {E} &= \rho / \varepsilon _ {0} \label{6.79} \\[4pt] \overline {\nabla} \times \overline {E} &= - \dfrac {\partial \overline {B}} {\partial t} \label{6.80} \\[4pt] \overline {\nabla} \times \overline {B} &= \mu _ {0} \overline {J} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \overline {E}} {\partial t} \label{6.81} \end{align}\]Here the variables are: \(\overline {E}\), electric field; \(\overline {B}\), magnetic field; \(\overline {J}\), current density; \(\rho\), charge density; \(\varepsilon _ {0}\), vacuum permittivity; \(\mu _ {0}\), magnetic permeability. We are interested in describing \(\overline {E}\) and \(\overline {B}\) in terms of a vector and a scalar potential, \(\overline {A}\) and \(\varphi\).

Next, let's review some basic properties of vectors and scalars. Generally, a vector field \(\overline {F}\) assigns a vector to each point in space. The divergence of the field,\[\overline {\nabla} \cdot \overline {F} = \dfrac {\partial F _ {x}} {\partial x} + \dfrac {\partial F _ {y}} {\partial y} + \dfrac {\partial F _ {z}} {\partial z} \label{6.82}\]is a scalar. For a scalar field \(\phi\), the gradient\[\nabla \phi = \dfrac {\partial \phi} {\partial x} \hat {x} + \dfrac {\partial \phi} {\partial y} \hat {y} + \dfrac {\partial \phi} {\partial z} \hat {z} \label{6.83}\]is a vector for the rate of change at one point in space. Here \(\hat{x}\), \(\hat{y}\), and \(\hat{z}\) are unit vectors along the Cartesian axes. Also, the curl\[\overline {\nabla} \times \overline {F} = \left| \begin{array} {l l l} {\hat {x}} & {\hat {y}} & {\hat {z}} \\ {\dfrac {\partial} {\partial x}} & {\dfrac {\partial} {\partial y}} & {\dfrac {\partial} {\partial z}} \\ {F _ {x}} & {F _ {y}} & {F _ {z}} \end{array} \right|\]is a vector whose \(x\), \(y\), and \(z\) components are the circulation of the field about that component. Some useful identities from vector calculus that we will use are\[\begin{align} \overline {\nabla} \cdot ( \overline {\nabla} \times \overline {F} ) &= 0 \label{6.85} \\[4pt] \nabla \times ( \nabla \phi ) &= 0 \label{6.86} \\[4pt] \nabla \times ( \overline {\nabla} \times \overline {F} ) &= \overline {\nabla} ( \overline {\nabla} \cdot \overline {F} ) - \overline {\nabla}^{2} \overline {F} \label{6.87} \end{align}\]We now introduce a vector potential \(\overline {A} ( \overline {r} , t )\) and a scalar potential \(\varphi ( \overline {r} , t )\), which we will relate to \(\overline {E}\) and \(\overline {B}\).
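The vector identities in Equations \ref{6.85} and \ref{6.86} can be verified symbolically. A sketch using SymPy's vector module, with an arbitrary test field and scalar of our choosing:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl, gradient

R = CoordSys3D('R')
F = (R.x**2 * R.y) * R.i + sp.sin(R.z) * R.j + (R.y * R.z**3) * R.k
phi = sp.exp(R.x) * R.y * R.z

print(divergence(curl(F)))    # 0  (Eq. 6.85: divergence of a curl vanishes)
print(curl(gradient(phi)))    # 0  (Eq. 6.86: curl of a gradient vanishes)
```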
Since\[\overline {\nabla} \cdot \overline {B} = 0\]and\[\overline {\nabla} \cdot ( \overline {\nabla} \times \overline {A} ) = 0,\]we can immediately relate the vector potential and magnetic field:\[\overline {B} = \overline {\nabla} \times \overline {A} \label{6.88}\]Inserting this into Equation \ref{6.80} and rewriting, we can relate the electric field and vector potential:\[\overline {\nabla} \times \left[ \overline {E} + \dfrac {\partial \overline {A}} {\partial t} \right] = 0 \label{6.89}\]Comparing Equations \ref{6.89} and \ref{6.86} allows us to state that a scalar potential \(\varphi\) exists with\[\overline {E} = - \dfrac {\partial \overline {A}} {\partial t} - \nabla \varphi \label{6.90}\]So, summarizing our results, we see that the potentials \(\overline {A}\) and \(\varphi\) determine the fields \(\overline {B}\) and \(\overline {E}\):\[\begin{align} \overline {B} ( \overline {r} , t ) &= \overline {\nabla} \times \overline {A} ( \overline {r} , t ) \label{6.91} \\[4pt] \overline {E} ( \overline {r} , t ) &= - \overline {\nabla} \varphi ( \overline {r} , t ) - \dfrac {\partial} {\partial t} \overline {A} ( \overline {r} , t ) \label{6.92} \end{align}\]We are interested in determining the classical wave equation for \(\overline {A}\) and \(\varphi\). Using Equation \ref{6.91}, differentiating Equation \ref{6.92}, and substituting into Equation \ref{6.81}, we obtain\[\overline {\nabla} \times ( \overline {\nabla} \times \overline {A} ) + \varepsilon _ {0} \mu _ {0} \left( \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} + \overline {\nabla} \dfrac {\partial \varphi} {\partial t} \right) = \mu _ {0} \overline {J} \label{6.93}\]Using Equation \ref{6.87},\[\left[ - \overline {\nabla}^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} \right] + \overline {\nabla} \left( \overline {\nabla} \cdot \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \varphi} {\partial t} \right) = \mu _ {0} \overline {J} \label{6.94}\]From Equation \ref{6.90}, we have\[\overline {\nabla} \cdot \overline {E} = - \dfrac {\partial ( \overline {\nabla} \cdot \overline {A} )} {\partial t} - \overline {\nabla}^{2} \varphi \label{6.95}\]and using Equation \ref{6.79},\[- \dfrac {\partial ( \overline {\nabla} \cdot \overline {A} )} {\partial t} - \overline {\nabla}^{2} \varphi = \rho / \varepsilon _ {0} \label{6.96}\]Notice from Equations \ref{6.91} and \ref{6.92} that we only need to specify four field components (\(A_{x}, A_{y}, A_{z}, \varphi\)) to determine all six \(\bar{E}\) and \(\bar{B}\) components. But \(\bar{E}\) and \(\bar{B}\) do not uniquely determine \(\bar{A}\) and \(\varphi\). So we can construct \(\bar{A}\) and \(\varphi\) in any number of ways without changing \(\bar{E}\) and \(\bar{B}\). Notice that if we change \(\bar{A}\) by adding \(\bar{\nabla} \chi\), where \(\chi\) is any function of \(\bar{r}\) and \(t\), this will not change \(\bar{B}\) (since \(\nabla \times (\nabla \chi) = 0\)). It will change \(E\) by \(-\frac{\partial}{\partial t} \bar{\nabla} \chi\), but we can compensate by changing \(\varphi\) to \(\varphi^{\prime}=\varphi-(\partial \chi / \partial t)\). Then \(\bar{E}\) and \(\bar{B}\) will both be unchanged. This property of changing representation (gauge) without changing \(\bar{E}\) and \(\bar{B}\) is gauge invariance.
We can define a gauge transformation with\[\bar{A}^{\prime}(\bar{r}, t)=\bar{A}(\bar{r}, t)+\bar{\nabla} \chi(\bar{r}, t) \label{6.97}\]\[\varphi^{\prime}(\bar{r}, t)=\varphi(\bar{r}, t)-\frac{\partial}{\partial t} \chi(\bar{r}, t) \label{6.98}\]Up to this point, the choice of \(\chi\), and therefore of \(A^{\prime}\) and \(\varphi^{\prime}\), remains arbitrary. Let's choose a \(\chi\) such that:\[\overline {\nabla} \cdot \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial \varphi} {\partial t} = 0 \label{6.99}\]which is known as the Lorentz condition. Then from Equation \ref{6.94}:\[- \nabla^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} = \mu _ {0} \overline {J} \label{6.100}\]The right-hand side of this equation vanishes when no currents are present. From Equation \ref{6.96}, we have:\[\varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \varphi} {\partial t^{2}} - \nabla^{2} \varphi = \dfrac {\rho} {\varepsilon _ {0}} \label{6.101}\]Equations \ref{6.100} and \ref{6.101} are wave equations for \(\overline {A}\) and \(\varphi\). Within the Lorentz gauge, we can still arbitrarily add another \(\chi\); it must only satisfy Equation \ref{6.99}. If we substitute Equations \ref{6.97} and \ref{6.98} into Equation \ref{6.99}, we see\[\nabla^{2} \chi - \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \chi} {\partial t^{2}} = 0 \label{6.102}\]So we can make further choices/constraints on \(\bar{A}\) and \(\varphi\) as long as \(\chi\) obeys Equation \ref{6.102}. We now choose \(\varphi=0\), the Coulomb gauge, and from Equation \ref{6.99} we see\[\overline {\nabla} \cdot \overline {A} = 0 \label{6.103}\]So the wave equation for our vector potential when the field is far from charges and currents (\(J= 0\)) is\[- \overline {\nabla}^{2} \overline {A} + \varepsilon _ {0} \mu _ {0} \dfrac {\partial^{2} \overline {A}} {\partial t^{2}} = 0 \label{6.104}\]The solutions to this equation are plane waves:\[\overline {A} = \overline {A} _ {0} \sin ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.105}\]where \(\alpha\) is a phase and \(k\) is the wave vector, which points along the direction of propagation and has a magnitude\[k^{2} = \omega^{2} \mu _ {0} \varepsilon _ {0} = \omega^{2} / c^{2} \label{6.106}\]Since \(\overline {\nabla} \cdot \overline {A} = 0\) (Equation \ref{6.103}), then\[- \overline {k} \cdot \overline {A} _ {0} \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) = 0\]and\[\overline {k} \cdot \overline {A} _ {0} = 0 \label{6.107}\]So the direction of the vector potential is perpendicular to the direction of wave propagation (\(\overline {k} \perp \overline {A _ {0}}\)). From Equations \ref{6.91} and \ref{6.92}, we see that for \(\varphi = 0\):\[\begin{align} \overline {E} &= - \dfrac {\partial \overline {A}} {\partial t} \\[4pt] &= - \omega \overline {A} _ {0} \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.108} \\[4pt] \overline {B} &= \overline {\nabla} \times \overline {A} \\[4pt] &= - \left( \overline {k} \times \overline {A} _ {0} \right) \cos ( \omega t - \overline {k} \cdot \overline {r} + \alpha ) \label{6.109} \end{align}\]Here the electric field is parallel with the vector potential, and the magnetic field is perpendicular to both the electric field and the direction of propagation (\(\overline {k} \perp \overline {E} \perp \overline {B}\)).
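Gauge invariance (Equations \ref{6.97} and \ref{6.98}) can also be checked symbolically. In this sketch the vector potential and gauge function \(\chi\) are arbitrary choices for illustration:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

R = CoordSys3D('R')
t = sp.symbols('t')

A = sp.cos(R.z - t) * R.i            # sample vector potential, phi = 0
chi = R.x * R.y * sp.sin(t)          # arbitrary gauge function

Ap = A + gradient(chi)               # transformed potentials, Eqs. 6.97-6.98
phip = -sp.diff(chi, t)

E = -sp.diff(A, t)                   # E = -grad(phi) - dA/dt with phi = 0
Ep = -gradient(phip) - sp.diff(Ap, t)

print(curl(Ap) - curl(A))            # 0 -> B is unchanged
print(Ep - E)                        # 0 -> E is unchanged
```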
The Poynting vector describing the direction of energy propagation is\[\overline {S} = \varepsilon _ {0} c^{2} ( \overline {E} \times \overline {B} )\]and its average value, the intensity, is\[I = \langle S \rangle = \dfrac {1} {2} \varepsilon _ {0} c E _ {0}^{2}.\]This page titled 7.6: Appendix - Review of Free Electromagnetic Field is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
7.7: Appendix - Magnetic Dipole and Electric Quadrupole Transitions
The second term in the expansion in Equation \ref{6.39} leads to magnetic dipole and electric quadrupole transitions, which we will describe here. The interaction potential is\[V (t) = - \frac {q} {m} \left[ i A _ {0} ( \hat {\varepsilon} \cdot \overline {p} ) ( \overline {k} \cdot \overline {r} ) e^{- i \omega t} - i A _ {0}^{*} ( \hat {\varepsilon} \cdot \overline {p} ) ( \overline {k} \cdot \overline {r} ) e^{i \omega t} \right]\]We can use the identity\[\begin{aligned} ( \hat {\varepsilon} \cdot \overline {p} ) ( \overline {k} \cdot \overline {r} ) & = \hat {\varepsilon} \cdot ( \overline {p}\, \overline {r} ) \cdot \overline {k} \\ & = \frac {1} {2} \hat {\varepsilon} \cdot ( \overline {p}\, \overline {r} - \overline {r}\, \overline {p} ) \cdot \overline {k} + \frac {1} {2} \hat {\varepsilon} \cdot ( \overline {p}\, \overline {r} + \overline {r}\, \overline {p} ) \cdot \overline {k} \end{aligned}\]to separate \(V(t)\) into two distinct light–matter interaction terms:\[V (t) = V _ {mag} (t) + V _ {Q} (t) \label{6.112}\]\[V _ {mag} (t) = \frac {- i q} {2 m}\, \hat {\varepsilon} \cdot ( \overline {p}\, \overline {r} - \overline {r}\, \overline {p} ) \cdot \overline {k} \left( A _ {0} e^{- i \omega t} + A _ {0}^{*} e^{i \omega t} \right) \label{6.113}\]\[V _ {Q} (t) = \frac {- i q} {2 m}\, \hat {\varepsilon} \cdot ( \overline {p}\, \overline {r} + \overline {r}\, \overline {p} ) \cdot \overline {k} \left( A _ {0} e^{- i \omega t} + A _ {0}^{*} e^{i \omega t} \right) \label{6.114}\]where the first term, \(V_{mag}\), gives rise to magnetic dipole transitions, and the second, \(V_Q\), leads to electric quadrupole transitions.

For the notation above, \(\overline{p}\,\overline{r}\) represents an outer product (tensor product \(\overline{p} : \overline{r}\)), so that\[\hat{\varepsilon} \cdot ( \overline{p}\,\overline{r} ) \cdot \overline{k} = \begin{pmatrix} \varepsilon_{x} & \varepsilon_{y} & \varepsilon_{z} \end{pmatrix} \begin{pmatrix} p_{x} r_{x} & p_{x} r_{y} & p_{x} r_{z} \\ p_{y} r_{x} & p_{y} r_{y} & p_{y} r_{z} \\ p_{z} r_{x} & p_{z} r_{y} & p_{z} r_{z} \end{pmatrix} \begin{pmatrix} k_{x} \\ k_{y} \\ k_{z} \end{pmatrix}\]This expression is meant to imply that the component of \(\overline{r}\) that lies along \(\overline{k}\) can influence the magnitude of \(\overline{p}\) along \(\hat{\varepsilon}\). Alternatively, this term could be written \(\sum_{a, b = x, y, z} \varepsilon_{a} ( p_{a} r_{b} ) k_{b}\).

These interaction potentials can be simplified and made more intuitive. Considering first Equation \ref{6.113}, we can use the vector identity \(( \bar{A} \times \bar{B} ) \cdot ( \bar{C} \times \bar{D} ) = ( \bar{A} \cdot \bar{C} ) ( \bar{B} \cdot \bar{D} ) - ( \bar{A} \cdot \bar{D} ) ( \bar{B} \cdot \bar{C} )\) to show\[\begin{aligned} \frac{1}{2} \hat{\varepsilon} \cdot ( \overline{p}\,\overline{r} - \overline{r}\,\overline{p} ) \cdot \overline{k} & = \frac{1}{2} \left[ ( \hat{\varepsilon} \cdot \overline{p} ) ( \overline{r} \cdot \overline{k} ) - ( \hat{\varepsilon} \cdot \overline{r} ) ( \overline{p} \cdot \overline{k} ) \right] = \frac{1}{2} ( \overline{k} \times \hat{\varepsilon} ) \cdot ( \overline{r} \times \overline{p} ) \\ & = \frac{1}{2} ( \overline{k} \times \hat{\varepsilon} ) \cdot \overline{L} \end{aligned} \label{6.116}\]For electronic spectroscopy, \(\overline{L}\) is the orbital angular momentum. Since the vector \(\overline{k} \times \hat{\varepsilon}\) describes the direction of the magnetic field \(\overline{B}\), and since \(A_{0} = B_{0} / 2 i |k|\),\[V_{mag}(t) = \frac{-q}{2 m}\, \overline{B}(t) \cdot \overline{L} \quad \quad \overline{B}(t) = \overline{B}_{0} \cos \omega t\]More generally, \(\overline{B} \cdot \overline{L}\) becomes \(\overline{B} \cdot ( \overline{L} + 2 \overline{S} )\) when the spin degrees of freedom are considered. In the case of an electron,\[\frac{q \overline{L}}{m} = \frac{2 c}{\hbar} \beta \overline{L} = \frac{2 c}{\hbar} \overline{\mu}_{mag}\]where \(\beta = e \hbar / 2 m c\) is the Bohr magneton and \(\overline{\mu}_{mag} = \beta \overline{L}\) is the magnetic dipole operator. So we have the form for the magnetic dipole interaction:\[V_{mag}(t) = -\frac{c}{\hbar}\, \overline{B}(t) \cdot \overline{\mu}_{mag}\]For electric quadrupole transitions, one can simplify Equation \ref{6.114} by evaluating matrix elements for the operator \(( \overline{p}\,\overline{r} + \overline{r}\,\overline{p} )\):\[\overline{p}\,\overline{r} + \overline{r}\,\overline{p} = \frac{i m}{\hbar} \left[ [ H_{0} , \overline{r} ]\, \overline{r} - \overline{r}\, [ \overline{r} , H_{0} ] \right] = \frac{- i m}{\hbar} [ \overline{r}\,\overline{r} , H_{0} ]\]and\[V_{Q}(t) = \frac{-q}{2 \hbar}\, \hat{\varepsilon} \cdot [ \overline{r}\,\overline{r} , H_{0} ] \cdot \overline{k} \left( A_{0} e^{-i \omega t} + A_{0}^{*} e^{i \omega t} \right) \label{6.121}\]Here \(\overline{r}\,\overline{r}\) is an outer product of vectors, and we define the quadrupole operator as the charge-weighted outer product\[\overline{\overline{Q}} = \sum_{j} q_{j}\, \overline{r}_{j}\, \overline{r}_{j} \label{6.122}\]Now, using \(A_{0} = E_{0} / 2 i \omega\), Equation \ref{6.121} becomes\[V(t) = -\frac{1}{2 i \hbar \omega}\, \overline{E}(t) \cdot [ \overline{\overline{Q}} , H_{0} ] \cdot \hat{k} \quad \quad \overline{E}(t) = \overline{E}_{0} \cos \omega t \label{6.123}\]Since the matrix element \(\langle k | [ \overline{\overline{Q}} , H_{0} ] | \ell \rangle = \hbar \omega_{k \ell}\, \overline{\overline{Q}}_{k \ell}\), we can write the electric quadrupole transition moment as\[\begin{aligned} V_{k \ell} & = \frac{i E_{0}\, \omega_{k \ell}}{2 \omega} \langle k | \hat{\varepsilon} \cdot \overline{\overline{Q}} \cdot \hat{k} | \ell \rangle \\ & = \frac{i E_{0}\, \omega_{k \ell}}{2 \omega}\, \overline{\overline{Q}}_{k \ell} \end{aligned}\]This page titled 7.7: Appendix - Magnetic Dipole and Electric Quadrupole Transitions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.1: Mixed States
Conceptually we are now switching gears to develop tools and ways of thinking about condensed phase problems. What we have discussed so far is the time-dependent properties of pure states, the states of a quantum system that can be characterized by a single wavefunction. For pure states one can precisely write the Hamiltonian for all particles and fields in the system of interest. These are systems that are isolated from their surroundings, or isolated systems to which we introduce a time-dependent potential. For describing problems in condensed phases, things are different. Molecules in dense media interact with one another, and as a result no two molecules have the same state. Energy placed into one degree of freedom will ultimately leak irreversibly into its environment. We cannot write down an exact Hamiltonian for these problems; however, we can concentrate on a few degrees of freedom that are observed in a measurement, and try to describe the influence of the surroundings in a statistical manner.

These observations lead to the concept of mixed states or statistical mixtures. A mixed state refers to any case in which we describe the behavior of an ensemble for which there is initially no phase relationship between the elements of the mixture. Examples include a system at thermal equilibrium and independently prepared states. For mixed states we have imperfect information about the system, and we use statistical averages in order to describe quantum observables.

How does a system get into a mixed state? Generally, if you have two systems and you put these in contact with each other, the interaction between the two will lead to a new system that is inseparable. Consider two systems \(H_S\) and \(H_B\) for which the eigenstates of \(H_S\) are \(| n \rangle\) and those of \(H_B\) are \(| \alpha \rangle\).\[H _ {0} = H _ {S} + H _ {B}\]\[\left. \begin{array} {l} {H _ {S} | n \rangle = E _ {n} | n \rangle} \\ {H _ {B} | \alpha \rangle = E _ {\alpha} | \alpha \rangle} \end{array} \right. \label{0.2}\]Before these systems interact, the state of the system \(| \psi _ {0} \rangle\) can be described as a product state of \(| n \rangle\) and \(| \alpha \rangle\):\[| \psi _ {0} \rangle = | \psi _ {S}^{0} \rangle | \psi _ {B}^{0} \rangle \label{0.3}\]\[| \psi _ {S}^{0} \rangle = \sum _ {n} s _ {n} | n \rangle\]\[| \psi _ {B}^{0} \rangle = \sum _ {\alpha} b _ {\alpha} | \alpha \rangle\]\[| \psi _ {0} \rangle = \sum _ {n , \alpha} s _ {n} b _ {\alpha} | n \alpha \rangle\]where \(s_n\) and \(b_\alpha\) are expansion coefficients. After these states are allowed to interact, we have a new state \(| \psi (t) \rangle\). The new state can still be expressed in the zero-order basis, although this does not represent the eigenstates of the new Hamiltonian:\[H = H _ {0} + V \label{0.6}\]\[| \psi (t) \rangle = \sum _ {n , \alpha} c _ {n , \alpha} | n \alpha \rangle \label{0.7}\]For any point in time, \(c _ {n , \alpha}\) is the complex amplitude for the mixed \(| n \alpha \rangle\) state. Generally speaking, at any time after bringing the systems into contact, \(c _ {n , \alpha} \neq s _ {n} b _ {\alpha}\).
The coefficient \(c_{n, \alpha}\) encodes \(P _ {n , \alpha} = \left| c _ {n , \alpha} \right|^{2}\), the joint probability of finding the system \(\left|\psi_{S}\right\rangle\) in state \(|n\rangle\) and simultaneously finding the bath \(\left|\psi_{B}\right\rangle\) in state \(|\alpha\rangle\). In the case of experimental observables, we are typically able to make measurements on the system \(H_S\), and are blind to \(H_B\). Then we are interested in the probability of occupying a particular eigenstate of the system averaged over the bath degrees of freedom:\[P _ {n} = \sum _ {\alpha} P _ {n , \alpha} = \sum _ {\alpha} \left| c _ {n , \alpha} \right|^{2} = \left\langle \left| c _ {n} \right|^{2} \right\rangle \label{0.8}\]Now let's look at the thinking that goes into describing ensembles. Imagine a room-temperature solution of molecules dissolved in a solvent. The same molecular Hamiltonian and wavefunctions can be used to express the state of any molecule in the ensemble. However, the details of the amplitudes of the eigenstates at any moment will also depend on the time-dependent local environment.

We will describe this problem with the help of a molecular Hamiltonian \(H_{m o l}^{(j)}\), which describes the state of molecule \(j\) within the solution through the wavefunction \(\left|\psi^{(j)}\right\rangle\). We also have a Hamiltonian for the liquid, \(H_{l i q}\), into which we wrap all of the solvent degrees of freedom. The full Hamiltonian for the solution can be expressed in terms of a sum over N solute molecules and the liquid, the interactions between solute molecules \(H_{i n t}\), and any solute–solvent interactions \(H_{mol-liq}\):\[\overline {H} = \sum _ {j = 1}^{N} H _ {m o l}^{( j )} + H _ {l i q} + \sum _ {j , k = 1 \atop j > k}^{N} H _ {\text {int}}^{( j , k )} + \sum _ {j = 1}^{N} H _ {m o l - l i q}^{( j )} \label{0.9}\]For our purposes, we take the molecular Hamiltonian to be the same for all solute molecules, i.e., \(H_{m o l}^{(j)}=H_{m o l}\), which obeys a time-independent Schrödinger equation (TISE)\[H _ {m o l} | \psi _ {n} \rangle = E _ {n} | \psi _ {n} \rangle \label{0.10}\]We will express the state of each molecule in this isolated-molecule eigenbasis. For the circumstances we are concerned with, where there are no interactions or correlations between solute molecules, we are allowed to neglect \(H_{int}\). Implicit in this statement is that we believe there is no quantum mechanical phase relationship between the different solute molecules. We will also drop \(H_{liq}\), since it is not the focus of our interests and will not influence the conclusions. We can therefore write the Hamiltonian for any individual molecule as\[H^{( j )} = H _ {m o l} + H _ {m o l - l i q}^{( j )} \label{0.11}\]and the statistically averaged Hamiltonian\[\overline {H} = \frac {1} {N} \sum _ {j = 1}^{N} H^{( j )} = H _ {m o l} + \left\langle H _ {m o l - l i q} \right\rangle \label{0.12}\]This Hamiltonian reflects an ensemble average of the molecular Hamiltonian under the influence of a varying solute–solvent interaction. To describe the state of any particular molecule, we can define a molecular wavefunction \(\left|\psi_{n}^{(j)}\right\rangle\), which we express as an expansion in the isolated-molecule eigenstates,\[| \psi _ {n}^{( j )} \rangle = \sum _ {n} c _ {n}^{( j )} | \psi _ {n} \rangle \label{0.13}\]Here the expansion coefficients vary by molecule because of their interaction with the liquid, but they are all expressed in terms of the isolated-molecule eigenstates.
Note that this expansion is in essence the same as Equation \ref{0.7}, with the association \(c _ {n}^{( j )} \Leftrightarrow c _ {n , \alpha}\). In either case, the mixed state arises from varying interactions with the environment. These may be static and appear from ensemble averaging, or time-dependent and arise from fluctuations in the environment. Recognizing the independence of different molecules, the wavefunction for the complete system \(|\Psi\rangle\) can be expressed in terms of the wavefunctions for the individual molecules under the influence of their local environment, \(\left|\psi^{(j)}\right\rangle\):\[\overline {H} | \Psi \rangle = \overline {E} | \Psi \rangle \label{0.14}\]\[| \Psi \rangle = | \psi^{( 1 )} \psi^{( 2 )} \psi^{( 3 )} \cdots \rangle = \prod _ {j = 1}^{N} | \psi^{( j )} \rangle \label{0.15}\]\[\overline {E} = \sum _ {j = 1}^{N} E^{( j )} \label{0.16}\]We now turn our attention to expectation values that we would measure in an experiment. First we recognize that for the individual molecule \(j\), the expectation value for an internal operator would be expressed\[\left\langle A^{( j )} \right\rangle = \left\langle \psi^{( j )} \left| \hat {A} \left( p _ {j} , q _ {j} \right) \right| \psi^{( j )} \right\rangle \label{0.17}\]This purely quantum mechanical quantity is itself an average. It represents the mean value obtained for a large number of measurements made on an identically prepared system, and reflects the need to average over the intrinsic quantum uncertainties in the positions and momenta of particles. In the case of a mixed state, we must also average the expectation value over the ensemble of different molecules. In the case of our solution, this would involve an average of the expectation value over the \(N\) molecules:\[\langle \langle A \rangle \rangle = \frac {1} {N} \sum _ {j = 1}^{N} \left\langle A^{( j )} \right\rangle \label{0.18}\]Double brackets are written here to emphasize that conceptually there are two levels of statistics in this average. The first involves the uncertainty over measurements of the same molecule in the identical pure state, whereas the second is an average over variations of the state of the system within an ensemble. However, we will drop this notation when we are dealing with ensembles, and take it as understood that expectation values must be averaged over a distribution. Expanding Equation \ref{0.18} with the use of Equations \ref{0.13} and \ref{0.17} allows us to write\[\langle A \rangle = \frac {1} {N} \sum _ {n , m} \sum _ {j = 1}^{N} c _ {m}^{( j )} \left( c _ {n}^{( j )} \right)^{*} \left\langle \psi _ {n} | \hat {A} | \psi _ {m} \right\rangle \label{0.19}\]The sum over \(j\) performs an ensemble average over the products of the complex wavefunction amplitudes. We use this average to define a density matrix, or density operator, \(\rho\), whose matrix elements are\[\rho _ {m n} = \left\langle c _ {m} c _ {n}^{*} \right\rangle \label{0.20}\]Then the expectation value becomes\[\left.\begin{aligned} \langle A \rangle & = \sum _ {n , m} \rho _ {m n} A _ {n m} \\ & = \operatorname {Tr} ( \rho \hat {A} ) \end{aligned} \right. \label{0.21}\]Here the trace \(Tr[...]\) refers to a sum over the diagonal elements of the matrix, \(\sum_{a}\langle a|\cdots| a\rangle\).
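Equations \ref{0.20} and \ref{0.21} translate directly into a few lines of NumPy. In this sketch the ensemble of coefficient vectors is random (a stand-in for the varying solute–solvent interaction), and the observable is an arbitrary diagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim = 5000, 3                         # ensemble size, basis size

c = rng.normal(size=(N, dim)) + 1j * rng.normal(size=(N, dim))
c /= np.linalg.norm(c, axis=1, keepdims=True)   # normalized c^(j)

rho = np.einsum('jm,jn->mn', c, c.conj()) / N   # rho_mn = <c_m c_n*>, Eq. 0.20
A = np.diag([0.0, 1.0, 2.0])                    # an arbitrary observable

direct = np.mean([(v.conj() @ A @ v).real for v in c])
print(np.trace(rho @ A).real, direct)           # Tr(rho A) = <<A>>, Eq. 0.21
```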
Although these matrices were evaluated in the basis of the molecular eigenstates, we emphasize that the definition and evaluation of the density matrix and operator matrix elements are not specific to a particular basis set. Although this is just one example, the principles apply quite generally to mixed states in the condensed phase. The wavefunction is a quantity that is meant to describe a molecular or nanoscale object. To the extent that finite temperature, fluctuations, disorder, and spatial separation ensure that phase relationships are randomized between different nano-environments, one can characterize the molecular properties of condensed phases as mixed states in which ensemble averaging is used to describe the interactions of these molecular environments with their surroundings.

The name density matrix derives from the observation that it plays the quantum role of a probability density. Comparing Equation \ref{0.21} with the statistical determination of the mean value of \(A\),\[\langle A \rangle = \sum _ {i = 1}^{M} P \left( A _ {i} \right) A _ {i} \label{0.22}\]we see that \(\rho\) plays the role of the probability distribution function \(P(A)\). Since \(\rho\) is Hermitian, it can be diagonalized, and in this diagonal basis the density matrix elements are in fact the statistical weights or probabilities of occupying a given state of the system.

Returning to our example, comparing Equation \ref{0.22} with Equation \ref{0.18} also implies that \(P_i(A) = 1/N\), i.e., that the contribution from each molecule to the average is statistically equivalent. Note also that the state of the system described by Equation \ref{0.15} is a system of fixed energy. So, the probability density in Equation \ref{0.18} indicates that this expression applies to a microcanonical ensemble (\(N\), \(V\), \(E\)), in which any realization of a system at fixed energy is equally probable, and the statistical weight is the inverse of the number of microstates: \(P=1 / \Omega\). In the case of a system in contact with a heat bath at temperature \(T\), i.e., the canonical ensemble (\(N\), \(V\), \(T\)), we instead express the average in terms of the probability that a member of an ensemble with fixed average energy can access a state of energy \(E_n\), \(P_n = e^{-E_n / k_B T}/Z\).

This page titled 8.1: Mixed States is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
8.2: Density Matrix for a Mixed State
Based on the discussion of mixed states in Section 8.1, we are led to define the expectation value of an operator for a mixed state as\[\langle \hat {A} (t) \rangle = \sum _ {j} p _ {j} \langle \psi^{( j )} (t) | \hat {A} | \psi^{( j )} (t) \rangle \label{0.23}\]where \(p_j\) is the probability of finding a system in the state defined by the wavefunction \(| \psi^{( j )} \rangle\). Correspondingly, the density matrix for a mixed state is defined as:\[\rho (t) \equiv \sum _ {j} p _ {j} | \psi^{( j )} (t) \rangle \langle \psi^{( j )} (t) | \label{0.24}\]For the case of a pure state, only one wavefunction \(| \psi^{( k )} \rangle\) specifies the state of the system, and \(p _ {j} = \delta _ {j k}\). Then the density matrix is as we described before,\[\rho (t) = | \psi (t) \rangle \langle \psi (t) | \label{0.25}\]with the density matrix elements\[\left.\begin{aligned} \rho (t) & {= \sum _ {n , m} c _ {n} (t) c _ {m}^{*} (t) | n \rangle \langle m |} \\ & {\equiv \sum _ {n , m} \rho _ {n m} (t) | n \rangle \langle m |} \end{aligned} \right. \label{0.26}\]For mixed states, using the separation of system (\(a\)) and bath (\(\alpha\)) degrees of freedom that we used above, the expectation value of an operator \(A\) can be expressed as\[\langle A (t) \rangle = \sum _ {a , \alpha} \sum _ {b , \beta} c _ {a , \alpha}^{*} c _ {b , \beta} \langle a \alpha | A | b \beta \rangle = \operatorname {Tr} [ \rho A ] \label{0.27}\]Here, the density matrix elements are\[\rho _ {a \alpha , b \beta} = c _ {a , \alpha}^{*} c _ {b , \beta}\]We are now in a position where we can average the system quantities over the bath configurations. If we consider that the operator \(A\) is only a function of the system coordinates, we can make further simplifications. An example is describing the dipole operator of a molecule dissolved in a liquid. Then the operator does not act on the bath states, \(\langle a \alpha | A | b \beta \rangle = A _ {a b} \delta _ {\alpha , \beta}\), and we can average the expectation value of \(A\) over the bath degrees of freedom as\[\left.\begin{aligned} \langle A (t) \rangle & = \sum _ {a , b , \alpha , \beta} c _ {a , \alpha}^{*} c _ {b , \beta} \langle a | A | b \rangle \delta _ {\alpha , \beta} \\ & = \sum _ {a , b} \left( \sum _ {\alpha} c _ {a , \alpha}^{*} c _ {b , \alpha} \right) A _ {a b} \\ & \equiv \sum _ {a , b} \left( \rho _ {S} \right) _ {b a} A _ {a b} \\ & = T r \left[ \rho _ {S} A \right] \end{aligned} \right. \label{0.28}\]Here we have defined a density matrix for the system degrees of freedom (also called the reduced density matrix, \(\sigma\)),\[\rho _ {s} = | \psi _ {s} \rangle \langle \psi _ {s} | \label{0.29}\]with density matrix elements obtained by tracing over the bath states:\[\left( \rho _ {S} \right) _ {b a} = \langle b | \rho _ {S} | a \rangle = \sum _ {\alpha} c _ {b , \alpha} c _ {a , \alpha}^{*} \label{0.30}\]The "S" subscript should not be confused with the Schrödinger picture wavefunctions.
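The reduced density matrix of Equation \ref{0.30} amounts to a partial trace over the bath, which is compact in NumPy. A sketch for a bipartite pure state \(|\psi\rangle = \sum_{a\alpha} c_{a,\alpha}|a\rangle|\alpha\rangle\) with randomly chosen coefficients:

```python
import numpy as np

dim_S, dim_B = 2, 4
rng = np.random.default_rng(1)
c = rng.normal(size=(dim_S, dim_B)) + 1j * rng.normal(size=(dim_S, dim_B))
c /= np.linalg.norm(c)                        # normalize the full state

rho = np.einsum('ai,bj->aibj', c, c.conj())   # full rho = |psi><psi|
rho_S = np.einsum('aibi->ab', rho)            # partial trace over the bath

# Same as Eq. 0.30: (rho_S)_{ab} = sum_alpha c_{a,alpha} c*_{b,alpha}
print(np.allclose(rho_S, c @ c.conj().T))     # True
print(np.trace(rho_S).real)                   # 1.0
```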
To relate this to our similar expression for \(\rho\), Equation \ref{0.25}, it is useful to note that the density matrix of the system is obtained by tracing over the bath degrees of freedom:\[\rho _ {S} = T r _ {B} ( \rho ) = \sum _ {\alpha} \langle \alpha | \rho | \alpha \rangle \label{0.31}\]Also, note that for a tensor product of system and bath operators,\[\operatorname {Tr} ( A \otimes B ) = \operatorname {Tr} ( A ) \operatorname {Tr} ( B ) \label{0.32}\]To interpret what the system density matrix represents, let's manipulate it a bit. Since \(\rho _ {S}\) is Hermitian, it can be diagonalized by a unitary transformation \(T\), where the new eigenbasis \(| m \rangle\) represents the mixed states of the original \(| \psi _ {S} \rangle\) system:\[\rho _ {S} = \sum _ {m} | m \rangle \rho _ {m m} \langle m | \label{0.33}\]\[\sum _ {m} \rho _ {m m} = 1 \label{0.34}\]The diagonal density matrix elements represent the probability of occupying state \(| m \rangle\), which includes the influence of the bath. To obtain these diagonalized elements, we apply the transformation \(T\) to the system density matrix:\[\begin{aligned} \left( \rho _ {S} \right) _ {m m} & = \sum _ {a , b} T _ {m b} \left( \rho _ {S} \right) _ {b a} T _ {a m}^{\dagger} \\ & = \sum _ {a , b , \alpha} c _ {b , \alpha} T _ {m b} c _ {a , \alpha}^{*} T _ {m a}^{*} \\ & = \sum _ {\alpha} f _ {m , \alpha} f _ {m , \alpha}^{*} \\ & = \left| f _ {m} \right|^{2} = p _ {m} \geq 0 \end{aligned} \label{0.35}\]where \(f _ {m , \alpha} = \sum _ {b} T _ {m b} c _ {b , \alpha}\). The quantum mechanical interaction of one system with another causes the system to be in a mixed state after the interaction. The mixed states, which are generally inseparable from the original states, are described by\[| \psi _ {S} \rangle = \sum _ {m} f _ {m} | m \rangle \label{0.36}\]If we only observe a few degrees of freedom, we can calculate observables by tracing over the unobserved degrees of freedom. This forms the basis for treating relaxation phenomena.

This page titled 8.2: Density Matrix for a Mixed State is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
9.1: Concepts and Definitions
As one change to our thinking, we now have to be concerned with ensembles. Most often, we will be concerned with systems in an equilibrium state with a fixed temperature for which many quantum states are accessible to the system. For comparing calculations of pure quantum states to experimental observables on macroscopic samples, we assume that all molecules have been prepared and observed in the same manner, so that the quantum expectation values for the internal operators can be compared directly to experimental observations. For mixed states, we have seen the need to perform an additional layer of averaging over the ensemble in the calculation of expectation values.

Perhaps the most significant change between isolated states and condensed matter is the dynamics. From the time-dependent Schrödinger equation, we see that the laws governing the time evolution of isolated quantum mechanical systems are invariant under time reversal. That is, there is no intrinsic directionality to time. If one reverses the sign of time, and thereby the momenta of objects, we should be able to exactly reverse the motion and propagate the system to where it was at an earlier time. This is also the case for classical systems evolving under Newton's equations of motion. In contrast, when a quantum system is in contact with another system having many degrees of freedom, a definite direction emerges for time, "the arrow of time," and the system's dynamics is no longer reversible. In such irreversible systems a well-defined prepared state decays in time to an equilibrium state, where energy has been dissipated and phase relationships are lost between the various degrees of freedom.

Additionally, condensed phase systems on a local, microscopic scale all have a degree of randomness or noisiness to their dynamics that represents local fluctuations in energy on the scale of \(k _ {B} T\). This behavior is observed even though the equations of motion that govern the dynamics are deterministic. Why? It is because we generally have imperfect knowledge about all of the degrees of freedom influencing the system, or experimentally view its behavior through a highly restricted perspective. For instance, it is common in experiments to observe the behavior of condensed phases through a molecular probe embedded within or under the influence of its surroundings. The physical properties of the probe are intertwined with the dynamics of the surrounding medium, and to us this appears as random behavior, for instance as Brownian motion. Other examples of the appearance of randomness from deterministic equations of motion include weather patterns, financial markets, and biological evolution. So, how do irreversible behavior and random fluctuations, hallmarks of all chemical systems, arise from the deterministic time-dependent Schrödinger equation? This fascinating question will be the central theme in our efforts going forward.

Let's begin by establishing some definitions and language that will be useful for us. We first classify chemical systems of interest as equilibrium or non-equilibrium systems. An equilibrium system is one in which the macroscopic properties (i.e., the intensive variables) are invariant with time, or at least invariant on the time scales over which one executes experiments and observes the system.
Further, there are no steady-state concentration or energy gradients (currents) in the system. Although they are macroscopically invariant, equilibrium states are microscopically dynamic.
For systems at thermal equilibrium we will describe their time-dependent behavior as dynamically reversible or irreversible. For us, reversible will mean that a system evolves deterministically: knowledge of the state of the system at one point in time, together with the equation of motion, means that you can describe the state of the system at all points in time, later or earlier. Irreversible systems are not deterministic; knowledge of the state of the system at one point in time does not provide enough information to precisely determine its past state.
Since all systems are irreversible in the strictest sense, the distinction is often related to the time scale of observation. For a given system, on a short enough time scale the dynamics will appear deterministic, whereas on very long time scales they will appear random. For instance, the dynamics of a dilute gas appear ballistic on time scales short compared to the mean collision time between particles, whereas their motion appears random and diffusive on much longer time scales. Memory refers to the ability to maintain deterministic motion and reversibility, and we will quantify the decay of memory in the system with correlation functions. For the case of quantum dynamics, we are particularly interested in the phase relationships between quantum degrees of freedom that result from deterministic motion under the time-dependent Schrödinger equation.
Nonequilibrium states refer to open or closed systems that have been acted on externally, moving them from equilibrium by changing the population or energy of the quantum states available to the system. Thermodynamically, work is performed on the system, leading to a free-energy gradient that the nonequilibrium system will minimize as it re-equilibrates. For nonequilibrium states, we will be interested in relaxation processes, which refer to the time-dependent processes involved in re-equilibrating the system. Dissipation refers to the relaxation processes involving redistribution of energy as a nonequilibrium state returns toward a thermal distribution. However, there are other relaxation processes, such as the randomization of the orientation of an aligned system or the randomization of the phase of synchronized oscillations.
With the need to describe ensembles, we will use statistical descriptions of the properties and behavior of a system. The variable \(A\), which can be a classical internal variable or a quantum operator, can be described statistically in terms of the mean and mean-square values of \(A\) determined from a large number of measurements:\[\langle A \rangle = \frac {1} {N} \sum _ {i = 1}^{N} A _ {i} \label{8.1}\]\[\left\langle A^{2} \right\rangle = \frac {1} {N} \sum _ {i = 1}^{N} A _ {i}^{2} \label{8.2}\]Here, the summation over \(i\) refers to averaging over \(N\) independent measurements. Alternatively, these equations can be expressed as\[\langle A \rangle = \sum _ {n = 1}^{M} P _ {n} A _ {n} \label{8.3}\]\[\left\langle A^{2} \right\rangle = \sum _ {n = 1}^{M} P _ {n} A _ {n}^{2} \label{8.4}\]The sum over \(n\) refers to a sum over the \(M\) possible values that \(A\) can take, weighted by \(P_n\), the probability of observing a particular value \(A_n\).
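To make these statistical definitions concrete, the short sketch below (not from the original text; the outcome values and probabilities are invented for illustration) computes the mean and mean-square value of \(A\) both ways: as an average over \(N\) individual measurements (Equations \ref{8.1} and \ref{8.2}) and as a probability-weighted sum over the \(M\) possible outcomes (Equations \ref{8.3} and \ref{8.4}).

```python
import numpy as np

rng = np.random.default_rng(0)

# N independent "measurements" of A; here A takes M = 3 discrete values.
values = np.array([-1.0, 0.0, 2.0])                   # the possible outcomes A_n
A_i = rng.choice(values, size=100_000, p=[0.2, 0.5, 0.3])

# Equations 8.1 and 8.2: averages over the N measurements
mean_A = A_i.mean()                                   # <A>   = (1/N) sum_i A_i
mean_A2 = (A_i**2).mean()                             # <A^2> = (1/N) sum_i A_i^2

# Equations 8.3 and 8.4: the same averages from outcome probabilities P_n
P_n = np.array([np.mean(A_i == v) for v in values])   # observed frequencies
mean_A_P = np.sum(P_n * values)                       # <A>   = sum_n P_n A_n
mean_A2_P = np.sum(P_n * values**2)                   # <A^2> = sum_n P_n A_n^2

# The two routes agree exactly here, because P_n are the observed frequencies.
print(mean_A, mean_A_P)
print(mean_A2, mean_A2_P)
```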
When the accessible values come from a continuous rather than a discrete distribution, one can describe the statistics in terms of the moments of the distribution function, \(P(A)\), which characterizes the probability of observing \(A\) between \(A\) and \(A+dA\):\[\langle A \rangle = \int dA\, A\, P(A) \label{8.5}\]\[\left\langle A^{2} \right\rangle = \int dA\, A^{2}\, P(A) \label{8.6}\]For time-dependent processes, we recognize that it is possible for these probability distributions to carry a time dependence, \(P(A,t)\). The ability to specify a value for \(A\) is captured in the variance of the distribution\[\sigma^{2} = \left\langle A^{2} \right\rangle - \langle A \rangle^{2} \label{8.7}\]
We will apply averages over probability distributions to the description of ensembles of molecules; however, we should emphasize that a statistical description of a quantum system also applies to a pure state. A fundamental postulate is that the expectation value of an operator\[\langle \hat {A} \rangle = \langle \psi | \hat {A} | \psi \rangle\]is the mean value of \(A\) obtained over many observations on identically prepared systems. The mean and variance of this expectation value represent the fundamental quantum uncertainty in a measurement.
To take this a step further and characterize the statistical relationship between two variables, one can define a joint probability distribution, \(P(A,B)\), which characterizes the probability of observing \(A\) between \(A\) and \(A+dA\) and \(B\) between \(B\) and \(B+dB\). The statistical relationship between the variables can also be obtained from the moments of \(P(A,B)\). The most important measure is the correlation function\[C _ {A B} = \langle A B \rangle - \langle A \rangle \langle B \rangle \label{8.8}\]You can see that this is the covariance, the analog of the variance for a bivariate distribution. It is a measure of the correlation between the variables \(A\) and \(B\); that is, for a specific value of \(A\), what are the associated statistics of \(B\)? To interpret this it helps to define a correlation coefficient\[r = \frac {C _ {A B}} {\sigma _ {A} \sigma _ {B}} \label{8.9}\]\(r\) can take on values from \(+1\) to \(-1\). If \(r = 1\), there is perfect correlation between the two distributions. If the variables \(A\) and \(B\) depend in the same way on a common internal variable, then they are correlated. If no statistical relationship exists between the two distributions, then they are uncorrelated, \(r = 0\), and \(\langle A B \rangle = \langle A \rangle \langle B \rangle\). It is also possible that the distributions depend in an equal and opposite manner on an internal variable, in which case we call them anti-correlated, with \(r = -1\). A short numerical sketch of these correlation measures follows below.
This page titled 9.1: Concepts and Definitions is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,327
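Here is the numerical sketch of the correlation measures promised in Section 9.1 (the Gaussian variables and their construction from a common internal variable are invented for illustration): it evaluates Equations \ref{8.7} through \ref{8.9} directly and confirms that \(r\) approaches \(+1\), \(-1\), and \(0\) for correlated, anti-correlated, and uncorrelated pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
x = rng.normal(size=N)              # a common "internal variable"
noise = rng.normal(size=N)          # statistically independent of x

def corr_coeff(A, B):
    """Correlation coefficient r = C_AB / (sigma_A sigma_B), Eqs. 8.7-8.9."""
    C_AB = np.mean(A * B) - np.mean(A) * np.mean(B)    # covariance, Eq. 8.8
    sigma_A = np.sqrt(np.mean(A**2) - np.mean(A)**2)   # from variance, Eq. 8.7
    sigma_B = np.sqrt(np.mean(B**2) - np.mean(B)**2)
    return C_AB / (sigma_A * sigma_B)                  # Eq. 8.9

A = 2.0 * x + 1.0                   # A depends linearly on x
print(corr_coeff(A, 3.0 * x))       # ~ +1: same dependence on x, correlated
print(corr_coeff(A, -3.0 * x))      # ~ -1: opposite dependence, anti-correlated
print(corr_coeff(A, noise))         # ~  0: no shared variable, uncorrelated
```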
9.2: Thermal Equilibrium
| https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Time_Dependent_Quantum_Mechanics_and_Spectroscopy_(Tokmakoff)/09%3A_Irreversible_and_Random_Processes/9.02%3A_Thermal_Equilibrium | For a statistical mixture at thermal equilibrium, individual molecules can occupy a distribution of energy states. An equilibrium system at temperature \(T\) has the canonical probability distribution\[\rho _ {e q} = \frac {e^{- \beta H}} {Z} \label{8.10}\]\(Z\) is the partition function and \(\beta = \left( k _ {B} T \right)^{- 1}\). Classically, we can calculate the equilibrium ensemble average value of a variable \(A\) as\[\langle A \rangle = \int d \mathbf {p} \, \int d \mathbf {q} \, A ( \mathbf {p} , \mathbf {q} ; t )\, \rho _ {e q} ( \mathbf {p} , \mathbf {q} ) \label{8.11}\]In the quantum mechanical case, we can obtain an equilibrium expectation value of \(A\) by averaging \(\langle A \rangle\) over the thermal occupation of quantum states:\[\langle A \rangle = \operatorname {Tr} \left( \rho _ {e q} A \right) \label{8.12}\]where \(\rho_{eq}\) is the density matrix at thermal equilibrium, a diagonal matrix characterized by Boltzmann-weighted populations in the quantum states:\[\rho _ {n n} = p _ {n} = \frac {e^{- \beta E _ {n}}} {Z} \label{8.13}\]In fact, the equilibrium density matrix is defined by Equation \ref{8.10}, as we can see by calculating its matrix elements:\[\left( \rho _ {e q} \right) _ {n m} = \frac {1} {Z} \left\langle n \left| e^{- \beta \hat {H}} \right| m \right\rangle = \frac {e^{- \beta E _ {n}}} {Z} \delta _ {n m} = p _ {n} \delta _ {n m} \label{8.15}\]Note also that\[Z = \operatorname {Tr} \left( e^{- \beta \hat {H}} \right) \label{8.16}\]Equation \ref{8.12} can also be written as\[\langle A \rangle = \sum _ {n} p _ {n} \langle n | A | n \rangle \label{8.14}\]It may not be obvious how this expression relates to our previous expression for mixed states\[\langle A \rangle = \sum _ {n , m} \left\langle c _ {n}^{*} c _ {m} \right\rangle A _ {m n} = \operatorname {Tr} ( \rho \hat {A} ). \]Remember that for an equilibrium system we are dealing with a statistical mixture in which no coherences (no phase relationships) are present in the sample. The lack of coherence is the important property that allows the equilibrium ensemble average \(\left\langle c _ {n}^{*} c _ {m} \right\rangle\) to be equated with the thermal population \(p_n\). To evaluate this average we recognize that the expansion coefficients are complex numbers, and that the equilibrium ensemble average is equivalent to an average over their phases. Since at equilibrium all phases are equally probable,\[\left\langle c _ {n}^{*} c _ {m} \right\rangle = \frac {1} {2 \pi} \int _ {0}^{2 \pi} c _ {n}^{*} c _ {m}\, d \phi _ {n m} = \frac {1} {2 \pi} \left| c _ {n} \right| \left| c _ {m} \right| \int _ {0}^{2 \pi} e^{- i \phi _ {n m}}\, d \phi _ {n m} \label{8.17}\]where\[c _ {n} = \left| c _ {n} \right| e^{i \phi _ {n}}\]and\[\phi _ {n m} = \phi _ {n} - \phi _ {m}.\]The integral in Equation \ref{8.17} is clearly zero unless \(n = m\), in which case \(\phi _ {n m} = 0\), giving\[\left\langle c _ {n}^{*} c _ {m} \right\rangle = p _ {n} \delta _ {n m} = \frac {e^{- \beta E _ {n}}} {Z} \delta _ {n m} \label{8.18}\]
This page titled 9.2: Thermal Equilibrium is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Andrei Tokmakoff via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 8,328
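As a numerical check on Section 9.2 (the three-level spectrum and the operator \(A\) below are arbitrary invented choices, not values from the text), the sketch constructs the equilibrium density matrix in the energy eigenbasis, confirms that \(\operatorname{Tr}(\rho_{eq} A)\) from Equation \ref{8.12} equals the population-weighted sum of Equation \ref{8.14}, and demonstrates the phase averaging of Equations \ref{8.17} and \ref{8.18}: products of random-phase coefficients average to \(p_n \delta_{nm}\).

```python
import numpy as np

beta = 1.0                                  # 1/(k_B T), arbitrary units
E = np.array([0.0, 1.0, 2.5])               # made-up three-level spectrum

# In the energy eigenbasis e^{-beta H} is diagonal, so Z = Tr(e^{-beta H})
# (Eq. 8.16) is a simple sum, and rho_eq (Eqs. 8.10, 8.13) is diag(p_n).
boltz = np.exp(-beta * E)
Z = boltz.sum()
p_n = boltz / Z
rho_eq = np.diag(p_n)

# Equations 8.12 and 8.14: <A> = Tr(rho_eq A) = sum_n p_n <n|A|n>
A = np.array([[0.3, 0.1, 0.0],
              [0.1, 1.0, 0.2],
              [0.0, 0.2, 2.0]])             # an arbitrary Hermitian operator
print(np.trace(rho_eq @ A), np.sum(p_n * np.diag(A)))    # identical

# Equations 8.17 and 8.18: phase averaging eliminates the coherences.
rng = np.random.default_rng(2)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(100_000, 3))   # uniform random phases
c = np.sqrt(p_n) * np.exp(1j * phi)         # |c_n| fixed at sqrt(p_n)
rho_avg = np.einsum('in,im->nm', c.conj(), c) / len(c)   # <c_n^* c_m>
print(np.round(rho_avg.real, 3))            # ~ diag(p_n); off-diagonals ~ 0
```

Only the diagonal elements survive the phase average, which is exactly the statement that the equilibrium density matrix carries populations but no coherences.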